From the series: Differential Equations and Linear Algebra

*Gilbert Strang, Massachusetts Institute of Technology (MIT)*

A graph has *n* nodes connected by *m* edges (other edges can be missing). This is a useful model for the Internet, the brain, pipeline systems, and much more.

OK. This video is a different direction. It will be about linear equations and not differential equations. A matrix is at the center of this video and it's called the incidence matrix. And that incidence matrix tells me everything about a graph.

Now, what do I mean by the word graph? I don't mean a graph of sine x or cosine x. The word graph is used in another way completely for some edges and some nodes. So I have some nodes. In this case 1, 2, 3, 4 nodes. That's my number n.

The number m is the number of edges that connect the nodes. So I have edge 1 connecting those nodes, edge 2, edge 3, 4, and 5. And I didn't put in an edge 6. A complete graph would have all possible edges, but a general graph can have just some edges. Some pairs of nodes are connected; others are not connected.

So now I want to create the matrix that shows me everything that's in that picture. Then I can work with the matrix. And graphs and their matrices are the number one model for so many applications, like the World Wide Web. On the web, every website would be a node, and there would be an edge between two nodes if those websites are linked. So the World Wide Web is a giant graph.

Or the telephone company has a giant graph in which the nodes are the telephones, and there is an edge when a call is made from one phone to another, between two phones. So, nodes and edges. And our brain-- the great problem of the 21st century is to understand the graph that represents our brain, the connections of neurons in our thinking. Well, that's a tougher problem than we'll solve today.

Let me work with that graph and create the matrix. So the matrix has five rows coming from the five edges. Let me take the first edge. So the first edge, there's edge number 1, goes from node 1 to node 2. The nodes correspond to columns. So if I want an edge from node 1 to node 2, that edge 1 will go in row 1. So edge 1. First edge is connected to row 1.

So that edge goes from node 1 to node 2, so I put a minus 1 and a 1. And it doesn't touch nodes 3 and 4. That's edge 1. That's row 1. Now that tells me everything I see about edge 1.

Edge 2 goes from 1 to 3. So I'll put a minus 1, a 0, and a 1 in row 2 because row 2 comes from edge 2 and it goes from 1 to 3.

Edge 3 will give me row 3, from 2 to 3. So edge 3 giving me row 3, 2 to 3.

Edge 4 goes from 1 to 4. So minus 1, nothing, nothing, 1. That tells me that edge 4 is going from node 1 to node 4. And finally, edge 5, from node 2 to node 4, gives the final row.
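Those five rows can be assembled directly; here is a minimal NumPy sketch, with the edge and node ordering exactly as in the transcript:

```python
import numpy as np

# Incidence matrix A: one row per edge, one column per node (1..4).
# -1 marks the node an edge leaves, +1 the node it enters.
A = np.array([
    [-1,  1,  0,  0],   # edge 1: node 1 -> node 2
    [-1,  0,  1,  0],   # edge 2: node 1 -> node 3
    [ 0, -1,  1,  0],   # edge 3: node 2 -> node 3
    [-1,  0,  0,  1],   # edge 4: node 1 -> node 4
    [ 0, -1,  0,  1],   # edge 5: node 2 -> node 4
])

print(A.shape)  # (5, 4): m = 5 edges by n = 4 nodes
```

Every row has exactly one minus 1 and one plus 1, so each row sums to zero; that small fact is behind everything that follows.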

Do you see there the graph? Everything, all the information in this picture is now captured in that matrix. So we can work with the matrix. And what does a matrix do? It multiplies vectors. That's what a matrix does, it acts on vectors.

So what happens if I multiply that matrix by a vector? So now let me take out these edge numbers and do a multiplication. That matrix has four columns, it's a 5 by 4 matrix, m by n. 5 by 4. So it multiplies a vector with four components and those four components will come from the four nodes.

And maybe they represent voltages at the nodes. Let me think like an electrical engineer for a moment. So if there's my matrix, I imagine I have voltages, v1, v2, v3, v4, at the nodes. So there's a v1 voltage here, a v2, a v3, and a v4, and from those voltages, currents will flow. So my unknowns are the voltages, the four voltages, and the five currents. That's what the engineer needs to know.

So first of all, when I multiply A times v, what do I get? Let me just do that multiplication. So that first row times that gives me v2 minus v1, right? The dot product of the row with the vector. The next one is v3 minus v1. Then I have a minus 1 there. It's a v3 minus a v2. Then I have a minus 1 and a 1. I think that's v4 minus v1. And finally, this dot product of that will give me a v4 minus v2.

So what am I seeing here? This is now A times v. I've done a multiplication by a vector of voltages. And what have I found? I found the differences in voltages, the voltage difference between one end of the edge and the other one. I have five edges and now I have five results and those are the voltage differences.
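That multiplication is easy to check numerically. With some sample voltages (arbitrary values, chosen only for illustration), A times v returns exactly those five differences:

```python
import numpy as np

A = np.array([[-1, 1, 0, 0], [-1, 0, 1, 0], [0, -1, 1, 0],
              [-1, 0, 0, 1], [0, -1, 0, 1]])

v = np.array([1.0, 2.0, 3.0, 4.0])   # sample voltages v1..v4 at the nodes

# Each component of A @ v is the voltage difference across one edge:
# [v2-v1, v3-v1, v3-v2, v4-v1, v4-v2]
print(A @ v)   # [1. 2. 1. 3. 2.]
```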

And what does a difference in voltage do, if these are at different voltages, different potentials? Current flows. If they're at the same potential, no current flows, right? That's the fundamental driving fact about currents: the difference in the voltages, the difference in the potentials, drives the flow.

And now, how much flow? So now I'm looking for the flows. Can I call those w, for the flows. So w2 is the flow on that edge, w1 is the flow there, then a w5, a w3, and a w4. My pair of unknowns-- and that's the beauty of this picture-- is the voltages v1 to v4 at the nodes, and the currents, the flows, w1 to w5 on the five edges. And I've seen that Av gives me the voltage differences.

I'm going to briefly, briefly approach the fundamental laws of flow, of current flow, of flow in any network. We're talking about the most basic equation, I would almost say, of applied mathematics. Maybe I should say of discrete applied mathematics. By discrete I mean a graph without derivatives. I'm not seeing derivatives here, I'm just seeing matrices and vectors.

So I have to remember that incidence matrix, A-- let me write it down again. Av gave the voltage differences. And that's one part of my picture. Another part is what is the equation that finally brings it together? That if I have the currents-- so the v's were the voltages. Now, there's going to be an equation involving w, the currents.

This, what I'm going to write here, is going to be really important. It's going to be Kirchhoff's Current Law, KCL. And I just emphasized that there are two Hs in Kirchhoff's name. So Kirchhoff's Current Law says-- and pay attention-- it says that the total flow into a node equals the flow out. We're talking about equilibrium here.

So if current is traveling around my graph, my network, and it's a stable equilibrium here so that flow into node 1 equals flow out of node 1. And let me tell you what that equation is in terms of the matrix A.

The voltage differences involved A and, beautifully, Kirchhoff's Current Law involves A transpose. So A transpose is now 4 by 5. It multiplies w, the vector of flows, with five components because I have five edges. And Kirchhoff's Current Law says that A transpose w is 0.
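One flow that satisfies A transpose w = 0 is a circulation around a loop. Sending one unit around the triangle 1 → 2 → 3 → 1 (so w2 = −1, since edge 2 is directed from 1 to 3) balances flow in and flow out at every node:

```python
import numpy as np

A = np.array([[-1, 1, 0, 0], [-1, 0, 1, 0], [0, -1, 1, 0],
              [-1, 0, 0, 1], [0, -1, 0, 1]])

# Circulate one unit around the loop 1 -> 2 -> 3 -> 1:
# +1 on edge 1 (1->2), +1 on edge 3 (2->3), -1 on edge 2 (directed 1->3)
w = np.array([1.0, -1.0, 1.0, 0.0, 0.0])

print(A.T @ w)   # [0. 0. 0. 0.] -- current in equals current out at each node
```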

So between A and A transpose, the incidence matrix is leading me to the fundamental equilibrium condition for flow in a network. Now, one more law is needed. It has to connect voltage differences to flows, potentials to currents.

Do you know who created that law in electrical engineering? It was Ohm. So Ohm's Law, finally. Ohm's Law says, edge by edge, that the potential difference-- the drop in potential that forces current-- is proportional to the current.

So voltage difference-- let me write it in words. The voltage difference-- voltage drop, I could say-- between the ends, or across a resistor, is proportional to the current, and some resistance, some physical number, comes in here. This is where the material we're working with comes in.

Kirchhoff's Laws hold for a network before we even say what the network is made of. But now if our network is made of resistors or pipes or whatever we have, then this number will be some resistance, or its reciprocal, a conductance. So E = IR: some resistance times the flow, times the current flow, w.

So a difference in v's is some number, this is the physical constant that we have to measure in a lab to know how many ohms our resistor is. That equation is on each edge. So we have a bunch of equations and together they tell us the four voltages and the five currents.
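Putting the three laws together, the whole system can be solved in a few lines. This is only a sketch under stated assumptions: unit resistances on every edge, node 4 grounded at v4 = 0, and a hypothetical source pushing one unit of current in at node 1 and out at node 4 (the source is my choice, not from the lecture):

```python
import numpy as np

A = np.array([[-1, 1, 0, 0], [-1, 0, 1, 0], [0, -1, 1, 0],
              [-1, 0, 0, 1], [0, -1, 0, 1]])

# Assumed external source: one unit of current in at node 1, out at node 4.
f = np.array([1.0, 0.0, 0.0, -1.0])

# Ground node 4 (v4 = 0) so the reduced system is invertible,
# then solve (A^T A) v = f for the other three voltages.
L = A.T @ A                          # graph Laplacian (unit resistances)
v = np.zeros(4)
v[:3] = np.linalg.solve(L[:3, :3], f[:3])

w = -A @ v                           # Ohm's Law: current = potential drop
print(v)                             # voltages: 0.625, 0.375, 0.5, 0
print(w)                             # currents on the five edges
```

With these numbers, one unit of current leaves node 1 (0.25 + 0.125 + 0.625) and one unit arrives at node 4, so Kirchhoff's balance holds at every node.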

And maybe I'll just make the main point here. The main point is that this matrix is crucial. A is crucial. A transpose is crucial. A gives voltage differences, it makes something happen. A transpose is the balance law, the balance or current balance at each node.

And you won't be surprised that when the whole thing is put together and we have a final equation to solve, we end up with A transpose and A. And that magic combination, A transpose A, is central to graph theory. It's called the graph Laplacian and has a name and a fame of its own. Thank you.
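For this graph, that magic combination can be checked directly: the diagonal of A transpose A holds the degree of each node, and the off-diagonal entries are −1 wherever an edge connects that pair of nodes (nodes 3 and 4 are not connected, so that entry is 0):

```python
import numpy as np

A = np.array([[-1, 1, 0, 0], [-1, 0, 1, 0], [0, -1, 1, 0],
              [-1, 0, 0, 1], [0, -1, 0, 1]])

L = A.T @ A   # the graph Laplacian
print(L)
# [[ 3 -1 -1 -1]     node 1 has degree 3
#  [-1  3 -1 -1]     node 2 has degree 3
#  [-1 -1  2  0]     nodes 3 and 4 are not connected: entry 0
#  [-1 -1  0  2]]

# Every row sums to zero: the constant vector is in the nullspace.
print(L.sum(axis=1))
```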
