From the series: Differential Equations and Linear Algebra
Gilbert Strang, Massachusetts Institute of Technology (MIT)
Even functions use only cosines (F(–x) = F(x)) and odd functions use only sines. The coefficients an and bn come from integrals of F(x) cos(nx) and F(x) sin(nx).
This video is to give you more examples of Fourier series. I'll start with a function that's odd. My odd function means that on the left side of 0, I get the negative of what I have on the right side of 0. F at minus x is minus F of x. And it's the sine function that's odd. The cosine function is even, and we will have no cosines here. All the integrals that involve cosines will tell us 0 for the coefficients a_n. What we'll get is the b coefficients, the sine functions.
So you see that I chose a simple odd function, minus 1 or 1, which would give a square wave if I continue it on. It will go down, up, down, up in a square wave pattern. And I'm going to express that as a combination of sine functions, smooth waves. And here was the formula from last time for the coefficients bk, except now I'm only integrating over half, over the zero to pi part of the interval, so I double it. So my function is odd, and sine kx is odd. When I multiply them, I have an even function. And the integral from minus pi to 0 is just the same as the integral from 0 to pi. So I'll do only 0 to pi and multiply by 2.
But my function on 0 to pi is 1. My nice square wave is just plus 1 there, so I'm just integrating sine kx dx. We can do this. It's minus cosine kx divided by k, right? That's the integral, with the 2 over pi factor. Now I have to put in the limits of integration, pi and 0, and get the answer. So what do I get? I get 2 over pi.
For k equal 1, the denominator will be 1, and I think the numerator is 2. Yes, when k is 1, I get 2. So the first coefficient is 4 over pi: the coefficient b1 is 4 over pi. The coefficient b2, now if I take k equal to 2, I have a 2 down below. But above, I have a 0, because the cosine of 2 pi is the same as the cosine of 0. When I subtract, I get nothing, so that's 0.
Now I go to k equals 3. So the k equals 3 will come down here. And when k is 3, it turns out the cosines don't cancel, they reinforce, and I get another 2. It's good if you do these. And when k is 4, I get a 0 again. You see the pattern?
The pattern for the integrals: as k goes 1, 2, 3, 4, 5, this part gives me a 2, then a 0, then a 2, then a 0, in order. If you check that, you'll see it.
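That 2, 0, 2, 0 pattern is easy to check by machine. Here is a quick Python sketch, not part of the lecture, that evaluates the formula b_k = (2/(pi k))(1 − cos k pi) coming from the integral above; the function name is just for illustration.

```python
import math

def b_coefficient(k):
    """Fourier sine coefficient of the square wave:
    b_k = (2/pi) * integral from 0 to pi of sin(kx) dx
        = (2/(pi*k)) * (1 - cos(k*pi))."""
    return 2.0 / (math.pi * k) * (1.0 - math.cos(k * math.pi))

# Odd k gives 4/(pi*k); even k gives 0 -- the 2, 0, 2, 0 pattern
for k in range(1, 7):
    print(k, b_coefficient(k))
```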
So I see now that this function is better than the delta function, but it's still not very smooth. It has jumps. It's a jump function, a step function. I see some decay, some slow decay, in the Fourier coefficients. This factor k is growing, so the numbers are going to 0, but not very fast. Not very fast, because my function is not very smooth.
So now you see, if I use those numbers, I'm saying that the square wave, this function, the minus 1 to 1 function, is equal to, let's see. I might as well factor out that 4 over pi. So it's 4 over pi times sine x, then 0 sine 2x's, then sine 3x divided by 3, then 0 sine 4x's, then sine 5x divided by 5, and so on.
That's a kind of nice example. It turns out that we have just the odd frequencies 1, 3, 5 in the square wave and they're multiplied by 4 over pi and they're divided by the frequency, so that's the decay.
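A short Python sketch of my own, not from the lecture, sums that series 4/pi (sin x + sin 3x/3 + sin 5x/5 + ...) and shows the partial sums settling toward the square wave's value of 1 between 0 and pi.

```python
import math

def square_wave_partial(x, n_terms):
    """Partial sum of (4/pi) * [sin x + sin 3x / 3 + sin 5x / 5 + ...],
    keeping the first n_terms odd frequencies."""
    total = 0.0
    for k in range(1, 2 * n_terms, 2):  # k = 1, 3, 5, ...
        total += math.sin(k * x) / k
    return 4.0 / math.pi * total

# The square wave equals 1 at x = pi/2; watch the partial sums close in
for n in (1, 5, 50, 500):
    print(n, square_wave_partial(math.pi / 2, n))
```

Because the coefficients decay only like 1/k, the convergence is slow, which is the point made above about functions with jumps.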
There is an odd function. Why don't I integrate that function? If I want an even function to show you an even example, I'll just integrate that square wave. When I integrate the square wave, it'll be even. Maybe I'll start the integral at 0; then it goes up with slope 1. And here the square wave is negative, so the integral is coming down.
So you see it's a-- what am I going to call this function? Sort of a repeating ramp function. It's a ramp down and then up, down and then up. But of course from minus pi to pi, that's where I'm looking. I'm looking between minus pi and pi. And I see that function is even. And what does even mean? That means that my function at minus-- there is minus x-- is the same as the value at x.
And what that means for a Fourier series is cosines. Even functions only have cosine terms. And of course, since I've just integrated, I might as well just integrate that series. So this repeating ramp function is going to be 4 over pi times the integral of those sines. I could figure out the cosine coefficients, the a's, patiently. But why should I do that when I can just integrate?
So the integral of sine x is minus cosine x, so I'll put the minus there: minus cosine x over 1, I guess. Now what's the integral of this one? The integral of sine 3x is minus cosine 3x over 3. And there's already a 3 below, and the minus sign, which I've got. So I think it's cosine of 3x over 3 squared, because I have one 3 there and I get another 3 from the integration. And similarly here: when I integrate sine 5x, I get cosine 5x with a 5, and I already had one 5, so 5 squared. So there you go.
There's something from freshman calculus which I totally forgot: the constant term. So there is a constant term, the average value, that a0. I've only found a1, a2, a3, a4, a5. I haven't found a0, and that would be the average of the ramp. What's the average of this function? It goes from 0 up to pi, and it seems like it's pretty much-- I didn't draw it well, but halfway. I think its average is about pi over 2, right? Let's hope that's right.
So let me sneak in the constant term here. For the ramp, I think the constant term is pi over 2. That's the average value. It would come from the a0 formula. Well, what do you see now? That's the other example I wanted you to see. You see a faster drop off: 1, 9, 25, 49, whatever. It's dropping off with k squared.
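Here is the same experiment for the ramp series pi/2 − (4/pi)(cos x + cos 3x/9 + cos 5x/25 + ...), again a sketch of my own rather than code from the lecture. With coefficients dropping like 1/k squared, far fewer terms are needed for the same accuracy.

```python
import math

def ramp_partial(x, n_terms):
    """Partial Fourier sum of the repeating ramp |x| on [-pi, pi]:
    pi/2 - (4/pi) * [cos x + cos 3x / 9 + cos 5x / 25 + ...]."""
    total = 0.0
    for k in range(1, 2 * n_terms, 2):  # k = 1, 3, 5, ...
        total += math.cos(k * x) / k**2
    return math.pi / 2 - 4.0 / math.pi * total

# The ramp equals 0 at x = 0 and pi at x = pi; 10 terms already come close
print(ramp_partial(0.0, 10), ramp_partial(math.pi, 10))
```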
And the reason it drops off faster than this one is that it's smoother. This function has corners. This function has jumps. So a jump is one level more rough, more noisy, than a ramp function. The smoother function has faster decay. Smooth-- let me write those words-- smooth function connects with faster decay, faster drop off of the Fourier coefficients.
It means that the Fourier series is much more useful. Fourier series is really terrific for functions that are smooth because then you only need to keep a few terms. For functions that have jumps or delta functions, you have to keep many, many terms and the Fourier series calculation is much more difficult.
So that's the second example. Let's see, what more shall I say? We learned something about integrating and taking the derivative so let me end with just two basic rules. Two basic rules.
So the first is the rule for derivatives. What's the Fourier series of df/dx? And the second will be the rule for a shift. What's the Fourier series of f of x minus d, a shift? You know that when I change x to x minus d, all that does is shift the graph by a distance d. That should do something nice to its Fourier coefficients.
So I'm starting with-- oh, I haven't given you any practice with the complex case. This would be a good time. Suppose we start with f of x equal to the sum of ck, a complex coefficient, times e to the ikx, the complex exponential. And you'll remember that sum goes from minus infinity to infinity.
So I have a Fourier series. I'm imagining I know the coefficients, and I want to say, what happens if I take the derivative? Well, just take the derivative. The derivative brings down a factor ik, so df/dx is the sum of ik times ck times e to the ikx. So that's the rule. Simple, but important. That's why Fourier series is so great: you have orthogonality, and then you have this simple rule for derivatives. The derivative just brings in a factor ik, so it makes the function noisier and you have larger coefficients.
And if I do f of x minus d, I'll change x to x minus d, so I'll see the sum of ck e to the ikx times e to the minus ikd, right? I've put in x minus d instead of x. And here I see the Fourier coefficient for a shifted function-- so ck was the Fourier coefficient for f. When I shift f, it multiplies that coefficient by a phase change. The magnitude stays the same, because everybody recognizes e to the minus ikd as a number of magnitude 1; it just gives a phase shift. Those are two good rules that show why you can use Fourier series in differential equations and in difference equations. Thank you.
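Both rules can be checked numerically. This Python sketch, with a few made-up toy coefficients for illustration, builds a small complex Fourier series, applies the derivative rule ck goes to ik ck and the shift rule ck goes to ck e^(-ikd), and compares against direct evaluation.

```python
import cmath

# A toy complex Fourier series: a few made-up coefficients c_k
coeffs = {-2: 0.5 - 0.25j, 0: 1.0, 1: 0.75j, 3: -0.3}

def f(x, c):
    """Evaluate sum over k of c_k * e^(ikx)."""
    return sum(ck * cmath.exp(1j * k * x) for k, ck in c.items())

d = 0.7  # shift distance
shifted = {k: ck * cmath.exp(-1j * k * d) for k, ck in coeffs.items()}  # shift rule
deriv = {k: 1j * k * ck for k, ck in coeffs.items()}                    # derivative rule

x = 1.3
# f(x - d) agrees with the series built from the shifted coefficients
print(abs(f(x - d, coeffs) - f(x, shifted)))
```

The derivative rule can be checked the same way, comparing the series built from the deriv coefficients against a centered finite difference of f.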
1.1: Overview of Differential Equations Linear equations include dy/dt = y, dy/dt = –y, dy/dt = 2ty. The equation dy/dt = y*y is nonlinear.
1.2: The Calculus You Need The sum rule, product rule, and chain rule produce new derivatives from the derivatives of x^n, sin(x) and e^x. The Fundamental Theorem of Calculus says that the integral inverts the derivative.
1.4b: Response to Exponential Input, exp(s*t) With exponential input, e^(st), from outside and exponential growth, e^(at), from inside, the solution, y(t), is a combination of two exponentials.
1.4c: Response to Oscillating Input, cos(w*t) An oscillating input cos(ωt) produces an oscillating output with the same frequency ω (and a phase shift).
1.4d: Solution for Any Input, q(t) To solve a linear first order equation, multiply each input q(s) by its growth factor and integrate those outputs.
1.4e: Step Function and Delta Function A unit step function jumps from 0 to 1. Its slope is a delta function: zero everywhere except infinite at the jump.
1.5: Response to Complex Exponential, exp(i*w*t) = cos(w*t)+i*sin(w*t) For linear equations, the solution for f = cos(ωt) is the real part of the solution for f = e^(iωt). That complex solution has magnitude G (the gain).
1.6: Integrating Factor for a Constant Rate, a The integrating factor e^(–at) multiplies the differential equation, y' = ay + q, to give the derivative of e^(–at) y: ready for integration.
1.6b: Integrating Factor for a Varying Rate, a(t) The integral of a varying interest rate provides the exponent in the growing solution (the bank balance).
1.7: The Logistic Equation When –by^2 slows down growth and makes the equation nonlinear, the solution approaches a steady state y(∞) = a/b.
1.7c: The Stability and Instability of Steady States Steady state solutions can be stable or unstable – a simple test decides.
2.1: Second Order Equations For the oscillation equation with no damping and no forcing, all solutions share the same natural frequency.
2.1b: Forced Harmonic Motion With forcing f = cos(ωt), the particular solution is Y*cos(ωt). But if the forcing frequency equals the natural frequency there is resonance.
2.3: Unforced Damped Motion With constant coefficients in a differential equation, the basic solutions are exponentials e^(st). The exponent s solves a simple equation such as As^2 + Bs + C = 0.
2.3c: Impulse Response and Step Response The impulse response g is the solution when the force is an impulse (a delta function). This also solves a null equation (no force) with a nonzero initial condition.
2.4: Exponential Response - Possible Resonance Resonance occurs when the natural frequency matches the forcing frequency — equal exponents from inside and outside.
2.4b: Second Order Equations With Damping A damped forced equation has a particular solution y = G cos(ωt – α). The damping ratio provides insight into the null solutions.
2.5: Electrical Networks: Voltages and Currents Current flowing around an RLC loop solves a linear equation with coefficients L (inductance), R (resistance), and 1/C (C = capacitance).
2.6: Method of Undetermined Coefficients With constant coefficients and special forcing terms (powers of t, cosines/sines, exponentials), a particular solution has this same form.
2.6b: An Example of the Method of Undetermined Coefficients This method is also successful for forces and solutions such as (at^2 + bt + c) e^(st): substitute into the equation to find a, b, c.
2.6c: Variation of Parameters Combine null solutions y1 and y2 with coefficients c1(t) and c2(t) to find a particular solution for any f(t).
2.7: Laplace Transform: First Order Equation Transform each term in the linear differential equation to create an algebra problem. You can then transform the algebra solution back to the ODE solution, y(t).
2.7b: Laplace Transform: Second Order Equation The second derivative transforms to s^2 Y, and the algebra problem involves the transfer function 1/(As^2 + Bs + C).
3.1: Pictures of the Solutions The direction field for dy/dt = f(t,y) has an arrow with slope f at each point (t, y). Arrows with the same slope lie along an isocline.
3.2: Phase Plane Pictures: Source, Sink, Saddle Solutions to second order equations can approach infinity or zero. Saddle points contain a positive and also a negative exponent or eigenvalue.
3.2b: Phase Plane Pictures: Spirals and Centers Imaginary exponents with pure oscillation provide a “center” in the phase plane. The point (y, dy/dt) travels forever around an ellipse.
3.2c: Two First Order Equations: Stability A second order equation gives two first order equations for y and dy/dt . The matrix becomes a companion matrix.
3.3: Linearization at Critical Points A critical point is a constant solution Y to the differential equation y' = f(y). Near that Y, the sign of df/dy decides stability or instability.
3.3b: Linearization of y'=f(y,z) and z'=g(y,z) With two equations, a critical point has f(Y,Z) = 0 and g(Y,Z) = 0. Near those constant solutions, the two linearized equations use the 2 by 2 matrix of partial derivatives of f and g .
3.3c: Eigenvalues and Stability: 2 by 2 Matrix, A Two equations y' = Ay are stable (solutions approach zero) when the trace of A is negative and the determinant is positive.
5.1: The Column Space of a Matrix, A An m by n matrix A has n columns, each in R^m. Capturing all combinations Av of these columns gives the column space – a subspace of R^m.
5.4: Independence, Basis, and Dimension Vectors v1 to vd are a basis for a subspace if their combinations span the whole subspace and are independent: no basis vector is a combination of the others. Dimension d = number of basis vectors.
5.5: The Big Picture of Linear Algebra A matrix produces four subspaces – column space, row space (same dimension), the space of vectors perpendicular to all rows (the nullspace), and the space of vectors perpendicular to all columns.
5.6: Graphs A graph has n nodes connected by m edges (other edges can be missing). This is a useful model for the Internet, the brain, pipeline systems, and much more.
6.1: Eigenvalues and Eigenvectors The eigenvectors x remain in the same direction when multiplied by the matrix (Ax = λx). An n by n matrix has n eigenvalues.
6.2: Diagonalizing a Matrix A matrix can be diagonalized if it has n independent eigenvectors. The diagonal matrix Λ is the eigenvalue matrix.
6.3: Solving Linear Systems dy/dt = Ay contains solutions y = e^(λt) x where λ and x are an eigenvalue / eigenvector pair for A.
6.4: The Matrix Exponential, exp(A*t) The shortest form of the solution uses the matrix exponential y = e^(At) y(0). The matrix e^(At) has eigenvalues e^(λt) and the eigenvectors of A.
6.4b: Similar Matrices, A and B=M^(-1)*A*M A and B are "similar" if B = M^(-1)AM for some matrix M. B then has the same eigenvalues as A.
6.5: Symmetric Matrices, Real Eigenvalues, Orthogonal Eigenvectors Symmetric matrices have n perpendicular eigenvectors and n real eigenvalues.
7.2: Positive Definite Matrices, S=A'*A A positive definite matrix S has positive eigenvalues, positive pivots, positive determinants, and positive energy v^T S v for every vector v. S = A^T A is always positive definite if A has independent columns.
7.2b: Singular Value Decomposition, SVD The SVD factors each matrix A into an orthogonal matrix U times a diagonal matrix Σ (the singular values) times another orthogonal matrix V^T: rotation times stretch times rotation.
7.3: Boundary Conditions Replace Initial Conditions A second order equation can change its initial conditions on y(0) and dy/dt(0) to boundary conditions on y(0) and y(1) .
8.1: Fourier Series A Fourier series separates a periodic function F(x) into a combination (infinite) of all basis functions cos(nx) and sin(nx) .
8.1b: Examples of Fourier Series Even functions use only cosines (F(–x) = F(x)) and odd functions use only sines. The coefficients an and bn come from integrals of F(x) cos(nx) and F(x) sin(nx).
8.1c: Fourier Series Solution of Laplace's Equation Inside a circle, the solution u(r, θ) combines r^n cos(nθ) and r^n sin(nθ). The full solution combines all those terms in a Fourier series to match the boundary conditions.
8.3: Heat Equation The heat equation ∂u/∂t = ∂^2u/∂x^2 starts from a temperature distribution u at t = 0 and follows it for t > 0 as it quickly becomes smooth.