From the series: Differential Equations and Linear Algebra
Gilbert Strang, Massachusetts Institute of Technology (MIT)
When the force is an impulse δ (t), the impulse response is g(t). When the force is f(t), the response is the “convolution” of f and g.
OK. This is one more thing to tell you about Laplace transforms, and it introduces a new word, convolution. So we're going to find our old formula in new language, in a new way. But the formula is familiar. And the problem is our basic problem: second order, linear, constant coefficient, with a forcing term.
And we know the Laplace transform-- and I'll take zero initial conditions. So the transform of our equation is s squared Y of s, plus Bs times Y of s, plus C times Y of s, equals F of s. No problem.
OK, now I'll divide by that. So I move that factor as 1 over, and I call it G. So this G is 1 over s squared plus Bs plus C. And that has the name transfer function. And then F of s is the transform of the forcing term.
OK. So here we have a nice formula for y of s, after I do that division. It's a product. The transform of the solution that we want is that transform times that transform. This is the transform of the impulse response. This is the transform of the right-hand side. Now I just have a Laplace transform question.
Suppose my transform is one function of s times another function of s. What is the inverse transform? What function y of t gives me G times F? And I'm just going to answer that. The answer involves little g and little f, the inverse transforms of G and F. But I do not just multiply those. The new operation that gives the right answer is called convolution. And I'll use a star.
So right now I'm going to say what that convolution means. This is a general question. If I have two functions multiplied together, and I want the inverse transform, then I take the separate inverse transforms, little g and little f, and I convolve them, I do convolution. And let me tell you what convolution is.
So convolution is-- here is the formula for convolution. It's an integral from 0 to t of one function times the other-- maybe I better use capital T for the integration variable. So y of t is the integral from 0 to t of g of t minus T, times f of T, dT. That's what convolution is.
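To make that integral concrete, here is a small numerical sketch of my own (not from the lecture): the convolution is approximated by a midpoint Riemann sum over the interval from 0 to t.

```python
import math

def convolve(g, f, t, n=1000):
    """Midpoint-rule approximation of the convolution
    (g * f)(t) = integral from 0 to t of g(t - T) * f(T) dT."""
    dT = t / n
    return sum(g(t - (k + 0.5) * dT) * f((k + 0.5) * dT) for k in range(n)) * dT

# Sanity check: convolving f(T) = T with g = 1 gives t^2 / 2.
print(convolve(lambda u: 1.0, lambda u: u, 0.5))   # ≈ 0.125
```

The midpoint rule is exact for linear integrands, so this simple check comes out to t^2/2 essentially to machine precision.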
So what have I achieved here? The same old formula. The formula which we described way back at the beginning as inputs f, growth factors over the remaining time, g. Put all those together by integration. Put all the inputs with their growth factors. Integrate to put them all together. And that is y. So it's a familiar formula, with only a new word. But you see that I could jump to the answer, once I knew about the convolution formula, and I knew that this y is the function whose transform is G times F.
So if I multiply transforms, I convolve functions. And looking at it the other way, if I multiply functions, I would convolve their transforms. So convolution extends the list of problems we can deal with by Laplace transform, because it tells us what to do with products, capital G times capital F. Or it tells us what to do with little g times little f.
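That statement-- the transform of g star f equals G times F-- can be spot-checked numerically. This is my own sketch, not from the lecture: I pick g and f with known transforms, work out their convolution by hand, and compare a truncated numerical Laplace transform of the convolution against the product G(s)F(s).

```python
import math

def laplace(h, s, T=25.0, n=25000):
    # Truncated midpoint-rule approximation of the transform
    # H(s) = integral from 0 to infinity of e^(-st) h(t) dt.
    dt = T / n
    return sum(math.exp(-s * (k + 0.5) * dt) * h((k + 0.5) * dt)
               for k in range(n)) * dt

g = lambda t: math.exp(-t)                           # G(s) = 1/(s + 1)
f = lambda t: math.exp(-2 * t)                       # F(s) = 1/(s + 2)
g_star_f = lambda t: math.exp(-t) - math.exp(-2*t)   # g * f, worked out by hand

s = 3.0
print(laplace(g_star_f, s))        # ≈ 0.05
print(1 / (s + 1) * 1 / (s + 2))   # G(s) * F(s) = 0.05
```

At s = 3 both sides give 1/4 times 1/5 = 0.05, which is the convolution theorem in action for this pair.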
So I'm almost through, because I don't plan to check it here. I could, but this isn't the right place; the book does it carefully. I don't plan to check that the transform of the convolution g star f really is the product G times F. But it is true. And I do plan to do an example.
Now second order gets a little messy. So let me do a first order example. I'll take the equation dy dt minus ay equals e to the ct. That's our usual first order differential equation, with e to the ct on the right-hand side. OK. I'm choosing those because I can take the transforms and check everything.
So let me transform both sides, starting from 0. The transform of the left side is s Y of s minus a Y of s, equals F of s. And I know the transform of e to the ct is 1 over s minus c. So s minus a factors out, and Y of s is 1 over s minus a, times 1 over s minus c.
Again, this is the simplest differential equation with a forcing term that I could use as an example. So now I'm looking for y of t. And I'm now going to use the language of convolution.
This is the transform of e to the at. And this is the transform of e to the ct. So you see I'm thinking of 1 over s minus a as the transform of e to the at, and 1 over s minus c as the transform of e to the ct. So there is one factor. And there's the other factor.
So according to the convolution formula, I can write down the inverse transform, the y of t I want, as an integral. I'm just going to copy the convolution, but now I know the functions. So it's an integral from 0 to t. What do I have? g of t minus T. What is the inverse transform of 1 over s minus a? It's e to the a times t minus T. And what is the inverse transform of 1 over s minus c? e to the cT, dT.
So I've just put in what I know in the convolution formula. And this should be the correct answer. And I can do this integral. And what do I get? You see, I'm going to combine those exponentials, so down below there will be a c minus a. It comes out perfectly.
e to the ct minus e to the at, over that c minus a. That's the right answer. It's not only what the convolution formula tells me, it's what I know. So that example is a good one. Notice that I didn't use partial fractions. Normally I would separate this into partial fractions, and then I would recognize those two pieces of the answer.
I didn't do that this time. Instead of using partial fractions, the algebra, I used the convolution formula, and did the integral or almost did it. We can do it. And we get that answer.
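As a quick check of that worked example (my own sketch, not part of the lecture), the convolution integral from 0 to t of e^(a(t-T)) times e^(cT) dT can be evaluated numerically and compared with the closed-form answer (e^(ct) - e^(at)) / (c - a):

```python
import math

a, c = 1.0, 2.0   # sample rates; any values with a != c work

def y_convolution(t, n=2000):
    # Midpoint-rule value of the convolution integral
    # integral from 0 to t of e^(a(t - T)) * e^(cT) dT.
    dT = t / n
    return sum(math.exp(a * (t - (k + 0.5) * dT)) * math.exp(c * (k + 0.5) * dT)
               for k in range(n)) * dT

def y_closed_form(t):
    return (math.exp(c * t) - math.exp(a * t)) / (c - a)

for t in (0.5, 1.0, 2.0):
    print(y_convolution(t), y_closed_form(t))   # the two columns agree
```

The agreement confirms both the convolution formula and the partial-fractions answer for this equation.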
OK. So the point of this video is simply to introduce the idea of a convolution, which is the quantity we need, the function we need, when the transform is a product of two transforms. OK. Thank you.
1.1: Overview of Differential Equations Linear equations include dy/dt = y, dy/dt = –y, dy/dt = 2ty. The equation dy/dt = y*y is nonlinear.
1.2: The Calculus You Need The sum rule, product rule, and chain rule produce new derivatives from the derivatives of x^n, sin(x) and e^x. The Fundamental Theorem of Calculus says that the integral inverts the derivative.
1.4b: Response to Exponential Input, exp(s*t) With exponential input, e^(st), from outside and exponential growth, e^(at), from inside, the solution, y(t), is a combination of two exponentials.
1.4c: Response to Oscillating Input, cos(w*t) An oscillating input cos(ωt) produces an oscillating output with the same frequency ω (and a phase shift).
1.4d: Solution for Any Input, q(t) To solve a linear first order equation, multiply each input q(s) by its growth factor and integrate those outputs.
1.4e: Step Function and Delta Function A unit step function jumps from 0 to 1. Its slope is a delta function: zero everywhere except infinite at the jump.
1.5: Response to Complex Exponential, exp(i*w*t) = cos(w*t)+i*sin(w*t) For linear equations, the solution for f = cos(ωt) is the real part of the solution for f = e^(iωt). That complex solution has magnitude G (the gain).
1.6: Integrating Factor for a Constant Rate, a The integrating factor e^(-at) multiplies the differential equation, y' = ay + q, to give the derivative of e^(-at) y: ready for integration.
1.6b: Integrating Factor for a Varying Rate, a(t) The integral of a varying interest rate provides the exponent in the growing solution (the bank balance).
1.7: The Logistic Equation When –by^2 slows down growth and makes the equation nonlinear, the solution approaches a steady state y(∞) = a/b.
1.7c: The Stability and Instability of Steady States Steady state solutions can be stable or unstable – a simple test decides.
2.1: Second Order Equations For the oscillation equation with no damping and no forcing, all solutions share the same natural frequency.
2.1b: Forced Harmonic Motion With forcing f = cos(ωt), the particular solution is Y cos(ωt). But if the forcing frequency equals the natural frequency there is resonance.
2.3: Unforced Damped Motion With constant coefficients in a differential equation, the basic solutions are exponentials e^(st). The exponent s solves a simple equation such as As^2 + Bs + C = 0.
2.3c: Impulse Response and Step Response The impulse response g is the solution when the force is an impulse (a delta function). This also solves a null equation (no force) with a nonzero initial condition.
2.4: Exponential Response - Possible Resonance Resonance occurs when the natural frequency matches the forcing frequency — equal exponents from inside and outside.
2.4b: Second Order Equations With Damping A damped forced equation has a particular solution y = G cos(ωt – α). The damping ratio provides insight into the null solutions.
2.5: Electrical Networks: Voltages and Currents Current flowing around an RLC loop solves a linear equation with coefficients L (inductance), R (resistance), and 1/C (C = capacitance).
2.6: Method of Undetermined Coefficients With constant coefficients and special forcing terms (powers of t, cosines/sines, exponentials), a particular solution has this same form.
2.6b: An Example of the Method of Undetermined Coefficients This method is also successful for forces and solutions such as (at^2 + bt + c) e^(st): substitute into the equation to find a, b, c.
2.6c: Variations of Parameters Combine null solutions y1 and y2 with coefficients c1(t) and c2(t) to find a particular solution for any f(t).
2.7: Laplace Transform: First Order Equation Transform each term in the linear differential equation to create an algebra problem. You can then transform the algebra solution back to the ODE solution, y(t) .
2.7b: Laplace Transform: Second Order Equation The second derivative transforms to s^2 Y and the algebra problem involves the transfer function 1/(As^2 + Bs + C).
3.1: Pictures of the Solutions The direction field for dy/dt = f(t,y) has an arrow with slope f at each point (t, y). Arrows with the same slope lie along an isocline.
3.2: Phase Plane Pictures: Source, Sink, Saddle Solutions to second order equations can approach infinity or zero. Saddle points contain a positive and also a negative exponent or eigenvalue.
3.2b: Phase Plane Pictures: Spirals and Centers Imaginary exponents with pure oscillation provide a “center” in the phase plane. The point (y, dy/dt) travels forever around an ellipse.
3.2c: Two First Order Equations: Stability A second order equation gives two first order equations for y and dy/dt . The matrix becomes a companion matrix.
3.3: Linearization at Critical Points A critical point is a constant solution Y to the differential equation y' = f(y). Near that Y, the sign of df/dy decides stability or instability.
3.3b: Linearization of y'=f(y,z) and z'=g(y,z) With two equations, a critical point has f(Y,Z) = 0 and g(Y,Z) = 0. Near those constant solutions, the two linearized equations use the 2 by 2 matrix of partial derivatives of f and g .
3.3c: Eigenvalues and Stability: 2 by 2 Matrix, A Two equations y' = Ay are stable (solutions approach zero) when the trace of A is negative and the determinant is positive.
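That trace/determinant test can be checked directly against the eigenvalues. Here is a small sketch of my own (not from the course materials): the eigenvalues of a 2 by 2 matrix are the roots of λ^2 - (trace)λ + det = 0, and the test should agree with both real parts being negative.

```python
import cmath

def is_stable(A):
    """Stability test from the summary: trace < 0 and det > 0."""
    (a, b), (c, d) = A
    return a + d < 0 and a * d - b * c > 0

def eigenvalues(A):
    # Roots of the characteristic equation lambda^2 - (trace)*lambda + det = 0.
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

A = [[-1.0, 2.0], [-2.0, -1.0]]   # eigenvalues -1 ± 2i: a spiral sink
print(is_stable(A))                                 # True
print(all(l.real < 0 for l in eigenvalues(A)))      # True

B = [[1.0, 0.0], [0.0, -2.0]]     # det < 0: a saddle, not stable
print(is_stable(B))                                 # False
```

For A the trace is -2 and the determinant is 5, so the test and the eigenvalue real parts agree.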
5.1: The Column Space of a Matrix, A An m by n matrix A has n columns, each in R^m. Capturing all combinations Av of these columns gives the column space – a subspace of R^m.
5.4: Independence, Basis, and Dimension Vectors v_1 to v_d are a basis for a subspace if their combinations span the whole subspace and are independent: no basis vector is a combination of the others. Dimension d = number of basis vectors.
5.5: The Big Picture of Linear Algebra A matrix produces four subspaces – column space, row space (same dimension), the space of vectors perpendicular to all rows (the nullspace), and the space of vectors perpendicular to all columns.
5.6: Graphs A graph has n nodes connected by m edges (other edges can be missing). This is a useful model for the Internet, the brain, pipeline systems, and much more.
6.1: Eigenvalues and Eigenvectors The eigenvectors x remain in the same direction when multiplied by the matrix (Ax = λx). An n by n matrix has n eigenvalues.
6.2: Diagonalizing a Matrix A matrix can be diagonalized if it has n independent eigenvectors. The diagonal matrix Λ is the eigenvalue matrix.
6.3: Solving Linear Systems dy/dt = Ay contains solutions y = e^(λt) x where λ and x are an eigenvalue/eigenvector pair for A.
6.4: The Matrix Exponential, exp(A*t) The shortest form of the solution uses the matrix exponential y = e^(At) y(0). The matrix e^(At) has eigenvalues e^(λt) and the eigenvectors of A.
6.4b: Similar Matrices, A and B=M^(-1)*A*M A and B are "similar" if B = M^(-1)AM for some matrix M. B then has the same eigenvalues as A.
6.5: Symmetric Matrices, Real Eigenvalues, Orthogonal Eigenvectors Symmetric matrices have n perpendicular eigenvectors and n real eigenvalues.
7.2: Positive Definite Matrices, S=A'*A A positive definite matrix S has positive eigenvalues, positive pivots, positive determinants, and positive energy v^T S v for every vector v. S = A^T A is always positive definite if A has independent columns.
7.2b: Singular Value Decomposition, SVD The SVD factors each matrix A into an orthogonal matrix U times a diagonal matrix Σ (the singular values) times another orthogonal matrix V^T: rotation times stretch times rotation.
7.3: Boundary Conditions Replace Initial Conditions A second order equation can change its initial conditions on y(0) and dy/dt(0) to boundary conditions on y(0) and y(1) .
8.1: Fourier Series A Fourier series separates a periodic function F(x) into an (infinite) combination of all basis functions cos(nx) and sin(nx).
8.1b: Examples of Fourier Series Even functions (F(–x) = F(x)) use only cosines and odd functions use only sines. The coefficients a_n and b_n come from integrals of F(x) cos(nx) and F(x) sin(nx).
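Those coefficient integrals are easy to evaluate numerically. As a sketch of my own (not from the course materials), here are the sine coefficients b_n = (1/π) ∫ F(x) sin(nx) dx for the odd square wave, which should come out to 4/(nπ) for odd n and 0 for even n:

```python
import math

def fourier_b(F, n, m=4000):
    # b_n = (1/pi) * integral from -pi to pi of F(x) sin(nx) dx, midpoint rule.
    dx = 2 * math.pi / m
    return sum(F(-math.pi + (k + 0.5) * dx) * math.sin(n * (-math.pi + (k + 0.5) * dx))
               for k in range(m)) * dx / math.pi

square = lambda x: 1.0 if x > 0 else -1.0   # odd square wave on (-pi, pi)

for n in (1, 2, 3):
    print(n, fourier_b(square, n))   # ≈ 4/pi, 0, 4/(3*pi)
```

Only the odd-n sines survive, which is why the square wave's series is sin(x) + sin(3x)/3 + sin(5x)/5 + ..., scaled by 4/π.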
8.1c: Fourier Series Solution of Laplace's Equation Inside a circle, the solution u(r, θ) combines r^n cos(nθ) and r^n sin(nθ). A Fourier series combines those solutions to match the boundary conditions on the circle.
8.3: Heat Equation The heat equation ∂u/∂t = ∂^2u/∂x^2 starts from a temperature distribution u at t = 0 and follows it for t > 0 as it quickly becomes smooth.
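That smoothing behavior can be seen with a few lines of explicit finite differences. This is a sketch of my own, not from the course materials: starting from u(x, 0) = sin(x) on [0, π] with u = 0 at both ends, the exact solution is sin(x) e^(-t), so the peak should decay like e^(-t).

```python
import math

nx, nt = 50, 200
dx = math.pi / nx
dt = 0.4 * dx * dx            # explicit scheme is stable when dt <= dx^2 / 2

u = [math.sin(i * dx) for i in range(nx + 1)]   # initial temperature sin(x)
for _ in range(nt):
    # One time step of u_t = u_xx with the centered second difference.
    u = [0.0] + [u[i] + dt / dx**2 * (u[i+1] - 2 * u[i] + u[i-1])
                 for i in range(1, nx)] + [0.0]

t = nt * dt
print(max(u), math.exp(-t))   # peak has decayed roughly like e^(-t)
```

Since sin(x) is an eigenfunction of both the heat equation and this discrete scheme, the numerical peak tracks e^(-t) closely, illustrating the rapid smoothing the summary describes.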