From the series: Differential Equations and Linear Algebra
Gilbert Strang, Massachusetts Institute of Technology (MIT)
The sum rule, product rule, and chain rule produce new derivatives from the derivatives of x^n, sin(x), and e^x. The Fundamental Theorem of Calculus says that the integral inverts the derivative.
OK, well, here we're at the beginning. And I think it's worth thinking about what we know: calculus. Differential equations is the big application of calculus, so it's kind of interesting to see what part of calculus, what information and what ideas from calculus, actually get used in differential equations. And I'm going to show you what I see. It's not everything by any means; it's some basic ideas, but not all the details you learned. So I'm not saying forget all those, but just focus on what matters.
OK. So the calculus you need is my topic. And the first thing is, you really do need to know basic derivatives: the derivative of x to the n, the derivatives of sine and cosine. Above all, the derivative of e to the x, which is e to the x. The derivative of e to the x is e to the x. That's the wonderful equation that is solved by the exponential: dy/dt equals y.
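To see that on a computer, here is a small Python sketch of my own (not from the lecture): the solution y = e^t has slope equal to its own value at every point, which a centered difference confirms numerically.

```python
import math

# dy/dt = y is solved by y = e^t: the slope equals the value everywhere.
def slope(f, t, h=1e-6):
    # centered-difference estimate of f'(t)
    return (f(t + h) - f(t - h)) / (2 * h)

for t in (0.0, 1.0, 2.5):
    assert abs(slope(math.exp, t) - math.exp(t)) < 1e-4
```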
We'll have to do more with that. And then the inverse function related to the exponential is the logarithm. With that special derivative of 1/x. OK. But you know those. Secondly, out of those few specific facts, you can create the derivatives of an enormous array of functions using the key rules.
The derivative of f plus g is the derivative of f plus the derivative of g; the derivative is a linear operation. The product rule: f times g prime plus g times f prime. The quotient rule? Who can remember that?
And above all, the chain rule. The derivative of that chain of functions, that composite function, is the derivative of f with respect to g times the derivative of g with respect to x. It's chains of functions that really blow open the range of functions we can deal with.
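Those two rules can be checked numerically. This is my own Python sketch (the helper `deriv` is just for illustration): a centered difference approximates each derivative, and both sides of the product rule and the chain rule agree.

```python
import math

def deriv(f, x, h=1e-6):
    # centered-difference estimate of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x, f, g = 0.7, math.sin, math.exp

# product rule: (f g)' = f g' + g f'
assert abs(deriv(lambda s: f(s) * g(s), x)
           - (f(x) * deriv(g, x) + g(x) * deriv(f, x))) < 1e-6

# chain rule: (f(g(x)))' = f'(g(x)) * g'(x)
assert abs(deriv(lambda s: f(g(s)), x)
           - deriv(f, g(x)) * deriv(g, x)) < 1e-6
```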
OK. And then the fundamental theorem. So the fundamental theorem involves the derivative and the integral, and it says that one is the inverse operation to the other: the derivative of the integral of a function is the function itself.
Here is y, and the integral goes from 0 to x. I don't care what that dummy variable is; I'll change that dummy variable to t, whatever, just to show the dummy variable.
The x is the limit of integration. I won't discuss that fundamental theorem, but it certainly is fundamental and I'll use it. Maybe that's better. I'll use the fundamental theorem right away.
So-- but remember what it says. It says that if you take a function, you integrate it, and you take the derivative, you get the function back again. OK, can I apply that to a key example in differential equations? Let me show you the function I have in mind. The function I have in mind, I'll call it y, is an integral from 0 to t.
So it's a function of t then, time. It's the integral of e to the t minus s times some function q of s. That's a remarkable formula for the solution to a basic differential equation.
So this solves the equation dy/dt equals y plus q of t. We'll see that equation again, and we'll derive this formula, but now I want to just use the fundamental theorem of calculus to check the formula. When we derive the formula-- well, it won't be wrong, because our derivation will be good. But also, it would be nice, I just think, if you plug that into the differential equation, to see that it's solved.
OK, so I want to take the derivative of that. That's my job, and that's why I do it here: it uses all the rules. OK, to take that derivative, I notice the t is appearing there in the usual place, and it's also inside the integral. But this is a simple function.
I can take e to the t-- I'm going to take e to the t outside the integral. e to the t. So I have a function of t times another function of t.
I'm going to use the product rule and show that in the derivative of that product, one term will be y and the other term will be q. Can I just apply the product rule to this function that I've pulled out of a hat? You'll see it again. OK, so it's a product of this times this. So the derivative dy/dt is-- the product rule says take the derivative of the first, that is e to the t, times the second.
Plus the first thing times the derivative of the second. Now I'm using the product rule. You have to notice that e to the t came twice, because it is there and its derivative is the same. OK, now, what's the derivative of that? Fundamental theorem of calculus.
We've integrated something; I want to take its derivative, so I get that something. I get e to the minus t times q of t. That's the fundamental theorem. Are you good with that?
So let's just look and see what we have. The first term was exactly y, exactly what is above, because when I took the derivative of the first guy, the f, it didn't change, so I still have y. What do I have here? e to the t times e to the minus t is one.
So e to the t cancels e to the minus t, and I'm left with q of t, just what I want. So the two terms from the product rule are the two terms in the differential equation. As you saw, the fundamental theorem was needed right there to find the derivative of what's in that box, what's in those parentheses. I just like that use of the fundamental theorem.
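The same check can be done numerically. This is my own Python sketch (the sample input q and the step sizes are my choices): compute y(t) = e^t times the integral from 0 to t of e^(-s) q(s) ds by the trapezoidal rule, then verify that its slope equals y plus q.

```python
import math

def q(s):
    # sample input; the check works the same way for any reasonable q
    return math.cos(s)

def y(t, n=2000):
    # y(t) = e^t * integral from 0 to t of e^(-s) q(s) ds  (trapezoidal rule)
    h = t / n
    vals = [math.exp(-i * h) * q(i * h) for i in range(n + 1)]
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return math.exp(t) * integral

# check dy/dt = y + q(t) at one point, using a centered difference
t, h = 1.3, 1e-5
dydt = (y(t + h) - y(t - h)) / (2 * h)
assert abs(dydt - (y(t) + q(t))) < 1e-3
```

The two terms from the product rule show up exactly as in the lecture: one reproduces y, the other reproduces q.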
OK one more topic of calculus we need. And here we go. So it involves the tangent line to the graph. This tangent to the graph.
So it's a straight line, and what we need is y of t plus delta t. That's taking any function-- maybe you'd rather I just called the function f. A function at a point a little beyond t is approximately the function at t plus the correction, plus a delta f, right? A delta f.
And what's the delta f, approximately? It's approximately delta t times the derivative at t. There's a lot of symbols on that line, but it expresses the most basic fact of differential calculus. If I put that f of t on this side with a minus sign, then I have delta f. If I divide by that delta t, then the same rule is saying that this is approximately df/dt.
That's a fundamental idea of calculus: the derivative at the point t is close to delta f divided by delta t, the change over a short time interval. OK, so that's the tangent line, because it starts with the constant term, it's a function of delta t, and that's the slope.
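Here is a small Python sketch of my own showing the tangent line at work on e^t (my choice of function and step sizes): the tangent value f(t) + delta t times f'(t) approaches the true value, and the error shrinks like delta t squared.

```python
import math

f = math.exp      # sample function; for e^t the derivative is e^t again
t = 0.0
for dt in (0.1, 0.01, 0.001):
    tangent = f(t) + dt * f(t)          # f(t) + delta_t * f'(t)
    error = abs(f(t + dt) - tangent)
    # error shrinks like dt^2: that's the bending term the line misses
    assert error < dt ** 2
```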
Just draw a picture. So I'm drawing a picture here. So let me draw a graph of-- oh there's the graph of e to the t. So it starts up with slope 1. Let me give it a little slope here.
OK, the tangent line-- and of course it comes down here, not below. So the tangent line is that line.
That's the tangent line. That's this approximation to f. And you see-- here is t equals 0, let's say, and here's t equals delta t. And you see, if I take a big step, my line is far from the curve.
And we want to get closer. So the way to get closer is we have to take into account the bending. The curve is bending. Which derivative tells us about bending?
That is one half delta t squared times the second derivative-- it turns out a one half shows up in there. So this is the term that changes the tangent line to a tangent parabola. It notices the bending at that point, the second derivative at that point.
So it curves up. It doesn't follow the function perfectly, but much better than the line. So this is the line, here is the parabola, and here is the function, the real one.
OK. I won't review the theory there, how it pulls out that one half, but you could check it. Now finally, what if we want to do even better? Well, we need to take into account the third derivative, and then the fourth derivative, and so on. And if we get all those derivatives-- all of them, that means-- we will be at the function, because that's a nice function, e to the t. We can recreate that function from knowing its height, its slope, its bending, and all the rest of the terms.
So there's a whole lot more-- infinitely many terms. That one over two-- the good way to think of one half is one over two factorial, two times one. Because the general term is one over n factorial, times delta t to the nth, pretty small, times the nth derivative of the function. And keep going.
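Those partial sums are easy to watch on a computer. A Python sketch of my own, expanding e^t around 0 so the nth derivative at the base point is 1: two terms give the tangent line, three give the tangent parabola, and a dozen terms essentially recover the function.

```python
import math

def taylor_exp(t, terms):
    # partial sum of t^n / n! : height, slope, bending, and so on
    return sum(t ** n / math.factorial(n) for n in range(terms))

t = 1.0
assert taylor_exp(t, 2) == 2.0                 # tangent line: 1 + t
assert abs(taylor_exp(t, 3) - 2.5) < 1e-12     # tangent parabola: 1 + t + t^2/2
assert abs(taylor_exp(t, 12) - math.exp(t)) < 1e-7
```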
That's called the Taylor series, named after Taylor. Kind of frightening at first. It's frightening because it's got infinitely many terms, and the terms are getting a little more complicated. For most functions, you really don't want to compute the nth derivative.
For e to the t, I don't mind computing the nth derivative, because it's still e to the t, but usually this isn't so practical. Tangent line, very practical. Tangent parabola, quite practical. Higher order terms, much less practical.
But the formula is beautiful, because you see the pattern-- that's really what mathematics is about, patterns-- and here you're seeing the pattern in the higher and higher terms. They all fit that pattern, and when you add up all the terms, if you have a nice function, then the approximation becomes perfect and you have equality.
So to end this lecture: approximate becomes equal, provided we have a nice function. And those are the best functions of mathematics, and the exponential is of course one of them. OK, that's calculus. Well, part of calculus. Thank you.
1.1: Overview of Differential Equations Linear equations include dy/dt = y, dy/dt = –y, dy/dt = 2ty. The equation dy/dt = y*y is nonlinear.
1.2: The Calculus You Need The sum rule, product rule, and chain rule produce new derivatives from the derivatives of x^n, sin(x), and e^x. The Fundamental Theorem of Calculus says that the integral inverts the derivative.
1.4b: Response to Exponential Input, exp(s*t) With exponential input, e^(st), from outside and exponential growth, e^(at), from inside, the solution, y(t), is a combination of two exponentials.
1.4c: Response to Oscillating Input, cos(w*t) An oscillating input cos(ωt) produces an oscillating output with the same frequency ω (and a phase shift).
1.4d: Solution for Any Input, q(t) To solve a linear first order equation, multiply each input q(s) by its growth factor and integrate those outputs.
1.4e: Step Function and Delta Function A unit step function jumps from 0 to 1. Its slope is a delta function: zero everywhere except infinite at the jump.
1.5: Response to Complex Exponential, exp(i*w*t) = cos(w*t)+i*sin(w*t) For linear equations, the solution for f = cos(ωt) is the real part of the solution for f = e^(iωt). That complex solution has magnitude G (the gain).
1.6: Integrating Factor for a Constant Rate, a The integrating factor e^(-at) multiplies the differential equation, y' = ay + q, to give the derivative of e^(-at) y: ready for integration.
1.6b: Integrating Factor for a Varying Rate, a(t) The integral of a varying interest rate provides the exponent in the growing solution (the bank balance).
1.7: The Logistic Equation When –by^2 slows down growth and makes the equation nonlinear, the solution approaches a steady state y(∞) = a/b.
1.7c: The Stability and Instability of Steady States Steady state solutions can be stable or unstable – a simple test decides.
2.1: Second Order Equations For the oscillation equation with no damping and no forcing, all solutions share the same natural frequency.
2.1b: Forced Harmonic Motion With forcing f = cos(ωt), the particular solution is Y*cos(ωt). But if the forcing frequency equals the natural frequency there is resonance.
2.3: Unforced Damped Motion With constant coefficients in a differential equation, the basic solutions are exponentials e^(st). The exponent s solves a simple equation such as As^2 + Bs + C = 0.
2.3c: Impulse Response and Step Response The impulse response g is the solution when the force is an impulse (a delta function). This also solves a null equation (no force) with a nonzero initial condition.
2.4: Exponential Response - Possible Resonance Resonance occurs when the natural frequency matches the forcing frequency — equal exponents from inside and outside.
2.4b: Second Order Equations With Damping A damped forced equation has a particular solution y = G cos(ωt – α). The damping ratio provides insight into the null solutions.
2.5: Electrical Networks: Voltages and Currents Current flowing around an RLC loop solves a linear equation with coefficients L (inductance), R (resistance), and 1/C (C = capacitance).
2.6: Methods of Undetermined Coefficients With constant coefficients and special forcing terms (powers of t , cosines/sines, exponentials), a particular solution has this same form.
2.6b: An Example of Method of Undetermined Coefficients This method is also successful for forces and solutions such as (at^2 + bt + c) e^(st): substitute into the equation to find a, b, c.
2.6c: Variations of Parameters Combine null solutions y1 and y2 with coefficients c1(t) and c2(t) to find a particular solution for any f(t).
2.7: Laplace Transform: First Order Equation Transform each term in the linear differential equation to create an algebra problem. You can then transform the algebra solution back to the ODE solution, y(t) .
2.7b: Laplace Transform: Second Order Equation The second derivative transforms to s^2 Y and the algebra problem involves the transfer function 1/(As^2 + Bs + C).
3.1: Pictures of the Solutions The direction field for dy/dt = f(t,y) has an arrow with slope f at each point t, y . Arrows with the same slope lie along an isocline.
3.2: Phase Plane Pictures: Source, Sink, Saddle Solutions to second order equations can approach infinity or zero. Saddle points contain a positive and also a negative exponent or eigenvalue.
3.2b: Phase Plane Pictures: Spirals and Centers Imaginary exponents with pure oscillation provide a “center” in the phase plane. The point (y, dy/dt) travels forever around an ellipse.
3.2c: Two First Order Equations: Stability A second order equation gives two first order equations for y and dy/dt . The matrix becomes a companion matrix.
3.3: Linearization at Critical Points A critical point is a constant solution Y to the differential equation y’ = f(y) . Near that Y , the sign of df/dy decides stability or instability.
3.3b: Linearization of y'=f(y,z) and z'=g(y,z) With two equations, a critical point has f(Y,Z) = 0 and g(Y,Z) = 0. Near those constant solutions, the two linearized equations use the 2 by 2 matrix of partial derivatives of f and g .
3.3c: Eigenvalues and Stability: 2 by 2 Matrix, A Two equations y’ = Ay are stable (solutions approach zero) when the trace of A is negative and the determinant is positive.
5.1: The Column Space of a Matrix, A An m by n matrix A has n columns, each in R^m. Capturing all combinations Av of these columns gives the column space – a subspace of R^m.
5.4: Independence, Basis, and Dimension Vectors v_1 to v_d are a basis for a subspace if their combinations span the whole subspace and are independent: no basis vector is a combination of the others. Dimension d = number of basis vectors.
5.5: The Big Picture of Linear Algebra A matrix produces four subspaces – column space, row space (same dimension), the space of vectors perpendicular to all rows (the nullspace), and the space of vectors perpendicular to all columns.
5.6: Graphs A graph has n nodes connected by m edges (other edges can be missing). This is a useful model for the Internet, the brain, pipeline systems, and much more.
6.1: Eigenvalues and Eigenvectors The eigenvectors x remain in the same direction when multiplied by the matrix (Ax = λx). An n by n matrix has n eigenvalues.
6.2: Diagonalizing a Matrix A matrix can be diagonalized if it has n independent eigenvectors. The diagonal matrix Λ is the eigenvalue matrix.
6.3: Solving Linear Systems dy/dt = Ay contains solutions y = e^(λt) x where λ and x are an eigenvalue / eigenvector pair for A.
6.4: The Matrix Exponential, exp(A*t) The shortest form of the solution uses the matrix exponential y = e^(At) y(0). The matrix e^(At) has eigenvalues e^(λt) and the eigenvectors of A.
6.4b: Similar Matrices, A and B=M^(-1)*A*M A and B are "similar" if B = M^(-1)AM for some matrix M. B then has the same eigenvalues as A.
6.5: Symmetric Matrices, Real Eigenvalues, Orthogonal Eigenvectors Symmetric matrices have n perpendicular eigenvectors and n real eigenvalues.
7.2: Positive Definite Matrices, S=A'*A A positive definite matrix S has positive eigenvalues, positive pivots, positive determinants, and positive energy v^T S v for every vector v. S = A^T A is always positive definite if A has independent columns.
7.2b: Singular Value Decomposition, SVD The SVD factors each matrix A into an orthogonal matrix U times a diagonal matrix Σ (the singular values) times another orthogonal matrix V^T: rotation times stretch times rotation.
7.3: Boundary Conditions Replace Initial Conditions A second order equation can change its initial conditions on y(0) and dy/dt(0) to boundary conditions on y(0) and y(1) .
8.1: Fourier Series A Fourier series separates a periodic function F(x) into a combination (infinite) of all basis functions cos(nx) and sin(nx) .
8.1b: Examples of Fourier Series Even functions use only cosines (F(–x) = F(x)) and odd functions use only sines. The coefficients a_n and b_n come from integrals of F(x) cos(nx) and F(x) sin(nx).
8.1c: Fourier Series Solution of Laplace's Equation Inside a circle, the solution u(r, θ) combines r^n cos(nθ) and r^n sin(nθ). The boundary solution combines all entries in a Fourier series to match the boundary conditions.
8.3: Heat Equation The heat equation ∂u/∂t = ∂^2u/∂x^2 starts from a temperature distribution u at t = 0 and follows it for t > 0 as it quickly becomes smooth.