From the series: Differential Equations and Linear Algebra
Gilbert Strang, Massachusetts Institute of Technology (MIT)
A graph has n nodes connected by m edges (other edges can be missing). This is a useful model for the Internet, the brain, pipeline systems, and much more.
OK. This video is a different direction. It will be about linear equations and not differential equations. A matrix is at the center of this video and it's called the incidence matrix. And that incidence matrix tells me everything about a graph.
Now, what do I mean by the word graph? I don't mean a graph of sine x or cosine x. The word graph is used in another way completely for some edges and some nodes. So I have some nodes. In this case 1, 2, 3, 4 nodes. That's my number n.
The number m is the number of edges that connect the nodes. So I have edge 1 connecting those nodes, edge 2, edge 3, 4, and 5. And I didn't put in an edge 6. A complete graph would have all possible edges, but a general graph can have some edges. Some pairs of nodes are connected; others are not.
So now I want to create the matrix that shows me everything that's in that picture. Then I can work with the matrix. And graphs and their matrices are the number one model for so many applications, like the World Wide Web. On the web, every website would be a node and there would be an edge between two nodes if those websites are linked. So the World Wide Web is a giant graph.
Or the telephone company has a giant graph in which the nodes are the telephones, and there is an edge when a call is made between two phones. So, nodes and edges. And our brain-- the great problem of the 21st century is to understand the graph that represents our brain, the connections of neurons in our thinking. Well, that's a tougher problem than we'll solve today.
Let me work with that graph and create the matrix. So the matrix has five rows coming from the five edges. Let me take the first edge. So the first edge, there's edge number 1, goes from node 1 to node 2. The nodes correspond to columns. So if I want an edge from node 1 to node 2, that edge 1 will go in row 1. So edge 1. First edge is connected to row 1.
So that edge goes from node 1 to node 2, so I put a minus 1 and a 1. And it doesn't touch nodes 3 and 4. That's edge 1. That's row 1. Now that tells me everything I see about edge 1.
Edge 2 goes from 1 to 3. So I'll put a minus 1, a 0, and a 1 in row 2 because row 2 comes from edge 2 and it goes from 1 to 3.
Edge 3 will give me row 3, from 2 to 3. So edge 3 gives me row 3, with a minus 1 and a 1 in columns 2 and 3.
Edge 4 went from 1 to 4. So minus 1, nothing, nothing, 1. That tells me that edge 4 is going from node 1 to node 4. And finally, from node 2 to node 4 is the final row.
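The rows we've just built can be assembled in a few lines of plain Python. This is a minimal sketch; the edge list simply encodes the five edges of this particular graph, and the variable names are illustrative.

```python
# Build the incidence matrix A of the graph, row by row.
# Edge (i, j) runs from node i to node j: -1 in column i, +1 in column j.
edges = [(1, 2), (1, 3), (2, 3), (1, 4), (2, 4)]  # the five edges above
n = 4  # number of nodes

A = []
for (i, j) in edges:
    row = [0] * n
    row[i - 1] = -1  # the edge leaves node i
    row[j - 1] = +1  # the edge enters node j
    A.append(row)

for row in A:
    print(row)  # five rows, one per edge
```

Each printed row matches one edge of the picture: row 1 is [-1, 1, 0, 0] for the edge from node 1 to node 2, and so on.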
Do you see there the graph? Everything, all the information in this picture is now captured in that matrix. So we can work with the matrix. And what does a matrix do? It multiplies vectors. That's what a matrix does, it acts on vectors.
So what happens if I multiply that matrix by a vector? So now let me take out these edge numbers and do a multiplication. That matrix has four columns, it's a 5 by 4 matrix, m by n. 5 by 4. So it multiplies a vector with four components and those four components will come from the four nodes.
And maybe they represent voltages at the nodes. Let me think like an electrical engineer for a moment. So there's my matrix, and I imagine I have voltages v1, v2, v3, v4 at the nodes. So there's a v1 voltage here, a v2, a v3, and a v4, and when those voltages differ, currents will flow. So my unknowns are the voltages, the four voltages, and the five currents. That's what the engineer needs to know.
So first of all, when I multiply A times v, what do I get? Let me just do that multiplication. So that first row times that gives me v2 minus v1, right? The dot product of the row with the vector. The next one is v3 minus v1. Then I have a minus 1 there. It's a v3 minus a v2. Then I have a minus 1 and a 1. I think that's v4 minus v1. And finally, this dot product of that will give me a v4 minus v2.
So what am I seeing here? This is now A times v. I've done a multiplication by a vector of voltages. And what have I found? I found the differences in voltages, the voltage difference between one end of the edge and the other one. I have five edges and now I have five results and those are the voltage differences.
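That multiplication is easy to check numerically. Here is a minimal sketch in plain Python, with made-up sample voltages at the four nodes:

```python
# Multiply the 5-by-4 incidence matrix A by a vector v of node voltages.
# Each entry of A v is a voltage difference across one edge.
A = [[-1, 1, 0, 0],
     [-1, 0, 1, 0],
     [0, -1, 1, 0],
     [-1, 0, 0, 1],
     [0, -1, 0, 1]]

def matvec(M, x):
    """Matrix-vector product: dot product of each row with x."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

v = [1.0, 4.0, 9.0, 16.0]  # sample voltages (made up for illustration)
diffs = matvec(A, v)
print(diffs)  # [v2-v1, v3-v1, v3-v2, v4-v1, v4-v2]
```

With these sample voltages the five entries come out to [3.0, 8.0, 5.0, 15.0, 12.0]: exactly the five differences v2-v1, v3-v1, v3-v2, v4-v1, v4-v2.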
And what does a difference in voltage do if these are at different voltages, different potentials? Current flows. If they're at the same potential, no current flows, right? That's the fundamental driving mechanism: the difference in voltage, the difference in potential, drives the flow.
And now, how much flow? So now I'm looking for the flows. Can I call those w, for the flows. So w2 is the flow on that edge, w1 is the flow there, then w5, w3, and w4. My pair of unknowns-- and that's the beauty of this picture-- is the voltages v1 to v4 at the nodes, and the currents, the flows, w1 to w5 on the five edges. And I've seen that Av gives me the voltage differences.
I'm going to briefly, briefly approach the fundamental laws of flow, of current flow, of flow in any network. We're talking about the most basic equation, I would almost say, of applied mathematics. Maybe I should say of discrete applied mathematics. By discrete I mean a graph without derivatives. I'm not seeing derivatives here, I'm just seeing matrices and vectors.
So I have to remember that incidence matrix, A-- let me write it down again. Av gave the voltage differences. And that's one part of my picture. Another part is: what is the equation that finally brings it together? The v's were the voltages. Now there's going to be an equation involving w, the currents.
This, what I'm going to write here, is going to be really important. It's going to be Kirchhoff's Current Law, KCL. And I just emphasized that there are two Hs in Kirchhoff's name. So Kirchhoff's Current Law says-- and pay attention-- it says that the total flow into a node equals the flow out. We're talking about equilibrium here.
So if current is traveling around my graph, my network, and it's a stable equilibrium here so that flow into node 1 equals flow out of node 1. And let me tell you what that equation is in terms of the matrix A.
The voltage differences involved A, and, beautifully, Kirchhoff's Current Law involves A transpose. So A transpose is now 4 by 5. It multiplies the flows w, a vector with five components because I have five edges. And Kirchhoff's Current Law says A transpose w equals 0.
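We can verify Kirchhoff's Current Law numerically: a current circulating around a closed loop of the graph balances at every node, so A transpose times that flow vector is zero. A minimal sketch in plain Python, using the loop through nodes 1, 2, 3:

```python
# KCL: A^T w = 0 means flow into each node equals flow out.
A = [[-1, 1, 0, 0],
     [-1, 0, 1, 0],
     [0, -1, 1, 0],
     [-1, 0, 0, 1],
     [0, -1, 0, 1]]

def transpose(M):
    return [list(col) for col in zip(*M)]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

# Unit current around the loop 1 -> 2 -> 3 -> 1:
# +1 on edge 1 (1->2), +1 on edge 3 (2->3), -1 on edge 2 (against 1->3).
w = [1, -1, 1, 0, 0]
print(matvec(transpose(A), w))  # [0, 0, 0, 0]: balance at all four nodes
```

Loop flows like this one fill the nullspace of A transpose; any flow satisfying KCL is a combination of flows around the loops of the graph.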
So between A and A transpose, the incidence matrix is leading me to the fundamental equilibrium condition for flow in a network. Now, one more law is needed. It has to connect voltage differences to flows, potentials to currents.
Do you know who created that law in electrical engineering? It was Ohm. So Ohm's Law, finally, Ohm's Law is edge by edge that the potential difference, the drop in potential, the potential forcing current is proportional to the current.
So voltage difference-- let me write it in words. Voltage difference, voltage drop I could say, between the ends of a resistor, across a resistor, is proportional to the current. Some physical number, some resistance, comes in here. This is where the material we're working with comes in.
Kirchhoff's Laws hold for a network before we even say what the network is made of. But now if our network is made of resistors or pipes or whatever we have, then a physical constant enters: a resistance, or its reciprocal, a conductance. So E equals IR: the voltage drop is some resistance times the flow, times the current, w.
So a difference in v's is some number times the current-- this is the physical constant that we have to measure in a lab to know how many ohms our resistor is. That equation holds on each edge. So we have a bunch of equations, and together they determine the four voltages and the five currents.
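Ohm's Law edge by edge can be sketched like this. The voltage differences and the conductances c = 1/R here are made-up numbers, one per edge, purely for illustration (sign conventions aside):

```python
# Ohm's Law on each edge: current = conductance * voltage difference,
# where the conductance c = 1/R comes from the resistance R measured in the lab.
diffs = [3.0, 8.0, 5.0, 15.0, 12.0]  # sample voltage differences, one per edge
c = [1.0, 0.5, 2.0, 1.0, 0.25]       # made-up conductances c = 1/R, one per edge
w = [ck * dk for ck, dk in zip(c, diffs)]
print(w)  # the five edge currents
```

One multiplication per edge: five differences in, five currents out, each scaled by that edge's own physical constant.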
And maybe I'll just make the main point here. The main point is that this matrix is crucial. A is crucial. A transpose is crucial. A gives voltage differences, it makes something happen. A transpose is the balance law, the balance or current balance at each node.
And you won't be surprised that when the whole thing is put together and we have a final equation to solve, we end up with A transpose and A. And that magic combination, A transpose A, is central to graph theory. It's called the graph Laplacian and has a name and a fame of its own. Thank you.
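For the graph in this video, that combination A transpose A can be computed directly. A minimal sketch in plain Python shows the classic Laplacian pattern: node degrees on the diagonal, a -1 for each connected pair, and rows summing to zero.

```python
# The graph Laplacian A^T A for the 4-node, 5-edge graph.
A = [[-1, 1, 0, 0],
     [-1, 0, 1, 0],
     [0, -1, 1, 0],
     [-1, 0, 0, 1],
     [0, -1, 0, 1]]

def matmul(X, Y):
    """Plain matrix-matrix multiply."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

At = [list(col) for col in zip(*A)]  # A transpose, 4 by 5
L = matmul(At, A)                    # 4 by 4 graph Laplacian
for row in L:
    print(row)
```

The diagonal entries 3, 3, 2, 2 are the node degrees, and each off-diagonal -1 marks a pair of connected nodes; nodes 3 and 4 are not connected, so that entry is 0.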
1.1: Overview of Differential Equations Linear equations include dy/dt = y, dy/dt = -y, dy/dt = 2ty. The equation dy/dt = y·y is nonlinear.
1.2: The Calculus You Need The sum rule, product rule, and chain rule produce new derivatives from the derivatives of x^n, sin(x), and e^x. The Fundamental Theorem of Calculus says that the integral inverts the derivative.
1.4b: Response to Exponential Input, exp(s*t) With exponential input e^(st) from outside and exponential growth e^(at) from inside, the solution y(t) is a combination of two exponentials.
1.4c: Response to Oscillating Input, cos(w*t) An oscillating input cos(ωt) produces an oscillating output with the same frequency ω (and a phase shift).
1.4d: Solution for Any Input, q(t) To solve a linear first order equation, multiply each input q(s) by its growth factor and integrate those outputs.
1.4e: Step Function and Delta Function A unit step function jumps from 0 to 1. Its slope is a delta function: zero everywhere except infinite at the jump.
1.5: Response to Complex Exponential, exp(i*w*t) = cos(w*t)+i*sin(w*t) For linear equations, the solution for f = cos(ωt) is the real part of the solution for f = e^(iωt). That complex solution has magnitude G (the gain).
1.6: Integrating Factor for a Constant Rate, a The integrating factor e^(-at) multiplies the differential equation y' = ay + q to give the derivative of e^(-at)y: ready for integration.
1.6b: Integrating Factor for a Varying Rate, a(t) The integral of a varying interest rate provides the exponent in the growing solution (the bank balance).
1.7: The Logistic Equation When -by^2 slows down growth and makes the equation nonlinear, the solution approaches a steady state y(∞) = a/b.
1.7c: The Stability and Instability of Steady States Steady state solutions can be stable or unstable – a simple test decides.
2.1: Second Order Equations For the oscillation equation with no damping and no forcing, all solutions share the same natural frequency.
2.1b: Forced Harmonic Motion With forcing f = cos(ωt), the particular solution is Y·cos(ωt). But if the forcing frequency equals the natural frequency there is resonance.
2.3: Unforced Damped Motion With constant coefficients in a differential equation, the basic solutions are exponentials e^(st). The exponent s solves a simple equation such as As^2 + Bs + C = 0.
2.3c: Impulse Response and Step Response The impulse response g is the solution when the force is an impulse (a delta function). This also solves a null equation (no force) with a nonzero initial condition.
2.4: Exponential Response - Possible Resonance Resonance occurs when the natural frequency matches the forcing frequency — equal exponents from inside and outside.
2.4b: Second Order Equations With Damping A damped forced equation has a particular solution y = G cos(ωt – α). The damping ratio provides insight into the null solutions.
2.5: Electrical Networks: Voltages and Currents Current flowing around an RLC loop solves a linear equation with coefficients L (inductance), R (resistance), and 1/C (C = capacitance).
2.6: Methods of Undetermined Coefficients With constant coefficients and special forcing terms (powers of t , cosines/sines, exponentials), a particular solution has this same form.
2.6b: An Example of Method of Undetermined Coefficients This method is also successful for forces and solutions such as (at^2 + bt + c)e^(st): substitute into the equation to find a, b, c.
2.6c: Variations of Parameters Combine null solutions y1 and y2 with coefficients c1(t) and c2(t) to find a particular solution for any f(t).
2.7: Laplace Transform: First Order Equation Transform each term in the linear differential equation to create an algebra problem. You can then transform the algebra solution back to the ODE solution, y(t) .
2.7b: Laplace Transform: Second Order Equation The second derivative transforms to s^2·Y and the algebra problem involves the transfer function 1/(As^2 + Bs + C).
3.1: Pictures of the Solutions The direction field for dy/dt = f(t,y) has an arrow with slope f at each point (t, y). Arrows with the same slope lie along an isocline.
3.2: Phase Plane Pictures: Source, Sink, Saddle Solutions to second order equations can approach infinity or zero. Saddle points contain a positive and also a negative exponent or eigenvalue.
3.2b: Phase Plane Pictures: Spirals and Centers Imaginary exponents with pure oscillation provide a “center” in the phase plane. The point (y, dy/dt) travels forever around an ellipse.
3.2c: Two First Order Equations: Stability A second order equation gives two first order equations for y and dy/dt . The matrix becomes a companion matrix.
3.3: Linearization at Critical Points A critical point is a constant solution Y to the differential equation y' = f(y). Near that Y, the sign of df/dy decides stability or instability.
3.3b: Linearization of y'=f(y,z) and z'=g(y,z) With two equations, a critical point has f(Y,Z) = 0 and g(Y,Z) = 0. Near those constant solutions, the two linearized equations use the 2 by 2 matrix of partial derivatives of f and g .
3.3c: Eigenvalues and Stability: 2 by 2 Matrix, A Two equations y' = Ay are stable (solutions approach zero) when the trace of A is negative and the determinant is positive.
5.1: The Column Space of a Matrix, A An m by n matrix A has n columns, each in R^m. Capturing all combinations Av of these columns gives the column space – a subspace of R^m.
5.4: Independence, Basis, and Dimension Vectors v1 to vd are a basis for a subspace if their combinations span the whole subspace and are independent: no basis vector is a combination of the others. Dimension d = number of basis vectors.
5.5: The Big Picture of Linear Algebra A matrix produces four subspaces – column space, row space (same dimension), the space of vectors perpendicular to all rows (the nullspace), and the space of vectors perpendicular to all columns.
5.6: Graphs A graph has n nodes connected by m edges (other edges can be missing). This is a useful model for the Internet, the brain, pipeline systems, and much more.
6.1: Eigenvalues and Eigenvectors The eigenvectors x remain in the same direction when multiplied by the matrix (Ax = λx). An n by n matrix has n eigenvalues.
6.2: Diagonalizing a Matrix A matrix can be diagonalized if it has n independent eigenvectors. The diagonal matrix Λ is the eigenvalue matrix.
6.3: Solving Linear Systems dy/dt = Ay contains solutions y = e^(λt)x where λ and x are an eigenvalue / eigenvector pair for A.
6.4: The Matrix Exponential, exp(A*t) The shortest form of the solution uses the matrix exponential y = e^(At)y(0). The matrix e^(At) has eigenvalues e^(λt) and the eigenvectors of A.
6.4b: Similar Matrices, A and B=M^(-1)*A*M A and B are “similar” if B = M^(-1)AM for some matrix M. B then has the same eigenvalues as A.
6.5: Symmetric Matrices, Real Eigenvalues, Orthogonal Eigenvectors Symmetric matrices have n perpendicular eigenvectors and n real eigenvalues.
7.2: Positive Definite Matrices, S=A'*A A positive definite matrix S has positive eigenvalues, positive pivots, positive determinants, and positive energy v^T S v for every vector v. S = A^T A is always positive definite if A has independent columns.
7.2b: Singular Value Decomposition, SVD The SVD factors each matrix A into an orthogonal matrix U times a diagonal matrix Σ (the singular values) times another orthogonal matrix V^T: rotation times stretch times rotation.
7.3: Boundary Conditions Replace Initial Conditions A second order equation can change its initial conditions on y(0) and dy/dt(0) to boundary conditions on y(0) and y(1) .
8.1: Fourier Series A Fourier series separates a periodic function F(x) into a combination (infinite) of all basis functions cos(nx) and sin(nx) .
8.1b: Examples of Fourier Series Even functions use only cosines (F(-x) = F(x)) and odd functions use only sines. The coefficients a_n and b_n come from integrals of F(x)cos(nx) and F(x)sin(nx).
8.1c: Fourier Series Solution of Laplace's Equation Inside a circle, the solution u(r, θ) combines r^n cos(nθ) and r^n sin(nθ). The boundary solution combines all entries in a Fourier series to match the boundary conditions.
8.3: Heat Equation The heat equation ∂u/∂t = ∂^2u/∂x^2 starts from a temperature distribution u at t = 0 and follows it for t > 0 as it quickly becomes smooth.