# 0708-1300/Class notes for Tuesday, November 13

Announcements go here

## Typed Notes

The notes below are by the students and for the students. Hopefully they are useful, but they come with no guarantee of any kind.

### First Hour

We begin with a review of last class. Since no one has typed up the notes for last class yet, I will do the review here.

Recall that we had an association ${\displaystyle M\rightarrow \Omega ^{k}(M)}$, the "k-forms on M", which equaled ${\displaystyle \{\omega :\ p\in M\rightarrow A^{k}(T_{p}M)\}}$

where

${\displaystyle A^{k}(V):=\{\omega :\underbrace {V\times \ldots \times V} _{k\ times}\rightarrow \mathbb {R} \}}$ which is

1) Multilinear

2) Alternating

We also saw that:

1) ${\displaystyle A^{k}(V)}$ is a vector space

2) there was a wedge product ${\displaystyle \wedge :A^{k}(V)\times A^{l}(V)\rightarrow A^{k+l}(V)}$ via ${\displaystyle \omega \wedge \lambda (v_{1},\ldots ,v_{k+l})=}$${\displaystyle {\frac {1}{k!l!}}\sum _{\sigma \in S_{k+l}}(-1)^{\sigma }\omega (v_{\sigma (1)},\ldots ,v_{\sigma (k)})\lambda (v_{\sigma (k+1)},\ldots ,v_{\sigma (k+l)})}$ that is

a) bilinear

b) associative

c) supercommutative, i.e., ${\displaystyle \omega \wedge \lambda =(-1)^{\deg(\omega )\deg(\lambda )}\lambda \wedge \omega }$

From these definitions we can define for ${\displaystyle \omega \in \Omega ^{k}(M)}$ and ${\displaystyle \lambda \in \Omega ^{l}(M)}$ that ${\displaystyle \omega \wedge \lambda \in \Omega ^{k+l}(M)}$ with the same properties as above.
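As a sanity check (not from class), the wedge-product formula above can be implemented directly; the helper names `sign` and `wedge` are ours, and the normalization is the ${\displaystyle {\frac {1}{k!l!}}}$ from the formula:

```python
from itertools import permutations
from math import factorial

def sign(perm):
    # sign of a permutation given as a tuple of indices 0..n-1
    s, perm = 1, list(perm)
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            s = -s
    return s

def wedge(omega, k, lam, l):
    """The (k+l)-form omega ^ lam via the sum over S_{k+l}."""
    def result(*vs):
        total = 0.0
        for sigma in permutations(range(k + l)):
            total += sign(sigma) * omega(*[vs[i] for i in sigma[:k]]) \
                                 * lam(*[vs[i] for i in sigma[k:]])
        return total / (factorial(k) * factorial(l))
    return result

# two 1-forms on R^3: dx1 and dx2
dx1 = lambda v: v[0]
dx2 = lambda v: v[1]
w = wedge(dx1, 1, dx2, 1)          # dx1 ^ dx2
u, v = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(w(u, v))   # 1.0
print(w(v, u))   # -1.0, as a 2-form is alternating
```

The two printed values also illustrate supercommutativity for two 1-forms: ${\displaystyle \omega \wedge \lambda =-\lambda \wedge \omega }$ when both degrees are odd.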

Claim

If ${\displaystyle \omega _{1},\ldots ,\omega _{n}}$ is a basis of ${\displaystyle A^{1}(V)=V^{*}}$ then ${\displaystyle \{\omega _{I}=\omega _{i_{1}}\wedge \ldots \wedge \omega _{i_{k}}\ :\ I=(i_{1},\ldots ,i_{k})\ {\text{with}}\ 1\leq i_{1}<\ldots <i_{k}\leq n\}}$ is a basis of ${\displaystyle A^{k}(V)}$ and ${\displaystyle \dim A^{k}(V)={\binom {n}{k}}}$

If ${\displaystyle \omega _{1},\ldots ,\omega _{n}\in \Omega ^{1}(M)}$ and ${\displaystyle \omega _{1}|_{p},\ldots ,\omega _{n}|_{p}}$ is a basis of ${\displaystyle (T_{p}M)^{*}\ \forall p\in M}$, then any ${\displaystyle \lambda \in \Omega ^{k}(M)}$ can be written as ${\displaystyle \lambda =\sum _{I}a_{I}(p)\omega _{I}}$ where the ${\displaystyle a_{I}:M\rightarrow \mathbb {R} }$ are smooth.

The equivalence of these two descriptions is left as an exercise.
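The dimension count in the claim is just counting increasing multi-indices; a quick check in Python (our own throwaway example with ${\displaystyle n=4,\ k=2}$):

```python
from itertools import combinations
from math import comb

n, k = 4, 2
# increasing multi-indices I = (i_1, ..., i_k) with 1 <= i_1 < ... < i_k <= n
indices = list(combinations(range(1, n + 1), k))
print(indices)                      # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
print(len(indices) == comb(n, k))   # True: dim A^2 of a 4-dim space is 4C2 = 6
```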

Example

Let us investigate ${\displaystyle \Omega ^{*}(\mathbb {R} ^{3})}$ (the * just means "anything").

Now, ${\displaystyle (T_{p}(\mathbb {R} ^{3}))^{*}={\text{span}}\{dx_{1}|_{p},dx_{2}|_{p},dx_{3}|_{p}\}}$ where the ${\displaystyle x_{i}:\mathbb {R} ^{3}\rightarrow \mathbb {R} }$ are the coordinate functions and so ${\displaystyle dx_{i}|_{p}:T_{p}\mathbb {R} ^{3}\rightarrow T_{x_{i}(p)}\mathbb {R} =\mathbb {R} }$

Hence, ${\displaystyle dx_{i}|_{p}\in (T_{p}\mathbb {R} ^{3})^{*}}$

Now, ${\displaystyle dx_{j}({\frac {\partial }{\partial x_{i}}})={\frac {\partial }{\partial x_{i}}}x_{j}=\delta _{ij}}$ and hence the ${\displaystyle dx_{i}|_{p}}$ form a basis.

So, ${\displaystyle \Omega ^{1}(\mathbb {R} ^{3})=\{g_{1}dx_{1}+g_{2}dx_{2}+g_{3}dx_{3}\}\approx \{g_{1},g_{2},g_{3}\}\approx }$ {vector fields on ${\displaystyle \mathbb {R} ^{3}}$}

where the ${\displaystyle g_{i}:\mathbb {R} ^{3}\rightarrow \mathbb {R} }$ are smooth.

${\displaystyle \Omega ^{0}(\mathbb {R} ^{3})=\{f:\mathbb {R} ^{3}\rightarrow \mathbb {R} \}}$

This is because to each point p we associate something that takes zero copies of the tangent space into the real numbers. Thus to each p we associate a number.

${\displaystyle \Omega ^{3}(\mathbb {R} ^{3})=\{kdx_{1}\wedge dx_{2}\wedge dx_{3}\}\approx }$ {functions} where again the k is just a smooth function from ${\displaystyle \mathbb {R} ^{3}}$ to ${\displaystyle \mathbb {R} }$.

${\displaystyle \Omega ^{2}(\mathbb {R} ^{3})=\{h_{1}dx_{2}\wedge dx_{3}+h_{2}dx_{3}\wedge dx_{1}+h_{3}dx_{1}\wedge dx_{2}\}\approx \{h_{1},h_{2},h_{3}\}\approx }$ {vector fields}

Aside

Recall our earlier discussion of how points and things like points (curves, equivalence classes of curves) push forward, while things dual to points (functions) pull back, and things dual to functions (such as derivations) push forward again. See earlier for the precise definitions.

Now differential forms pull back, i.e., for ${\displaystyle \phi :M\rightarrow N}$ and ${\displaystyle \lambda \in \Omega ^{k}(N)}$ we get ${\displaystyle \phi ^{*}(\lambda )\in \Omega ^{k}(M)}$ via ${\displaystyle \phi ^{*}(\lambda )(v_{1},\ldots ,v_{k})=\lambda (\phi _{*}v_{1},\ldots ,\phi _{*}v_{k})}$

The pullback preserves all the properties discussed above and is well defined. In particular, it is compatible with the wedge product via ${\displaystyle \phi ^{*}(\omega \wedge \lambda )=\phi ^{*}(\omega )\wedge \phi ^{*}(\lambda )}$
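A pointwise sketch of the pullback: if ${\displaystyle \phi }$ is linear then ${\displaystyle \phi _{*}}$ is just a matrix ${\displaystyle A}$, and pulling back a top form multiplies by ${\displaystyle \det A}$. The names `pullback`, `lam`, `A` below are ours:

```python
import numpy as np

def pullback(A, form):
    """Pullback of a k-form along a linear map phi with phi_* v = A @ v."""
    return lambda *vs: form(*[A @ v for v in vs])

# lam = dx1 ^ dx2 on R^2, written out explicitly
lam = lambda v, w: v[0] * w[1] - v[1] * w[0]

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
pb = pullback(A, lam)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(pb(e1, e2))   # 6.0 = det A: a top form pulls back by the determinant
```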

Theorem-Definition

Given M, ${\displaystyle \exists }$ ! linear map ${\displaystyle d:\Omega ^{k}(M)\rightarrow \Omega ^{k+1}(M)}$ satisfying

1) If ${\displaystyle f\in \Omega ^{0}(M)}$ then ${\displaystyle df(X)=X(f)}$ for ${\displaystyle X\in TM}$

2) ${\displaystyle d^{2}=0}$. I.e. if ${\displaystyle d_{k}:\Omega ^{k}\rightarrow \Omega ^{k+1}}$ and ${\displaystyle d_{k+1}:\Omega ^{k+1}\rightarrow \Omega ^{k+2}}$ then ${\displaystyle d_{k+1}\circ d_{k}=0}$.

3) ${\displaystyle d(\omega \wedge \lambda )=d\omega \wedge \lambda +(-1)^{deg\omega }\omega \wedge d\lambda }$

### Second Hour

Some notes about the above definition:

1) When we restrict our d to functions we just get the old meaning for d.

2) Philosophically, there is a duality between differential forms and manifolds and that duality is given by integration. In this duality, d is the adjoint of the boundary operator on manifolds. For manifolds, the boundary of the boundary is empty and hence it is reasonable that ${\displaystyle d^{2}=0}$ on differential forms.

3) To remember the formula in (3) above and others like it, it helps to keep in mind which objects are "odd" and which are "even"; when commuting such operators we get exactly the signs one would expect from multiplying odd and even objects.

Example

Let us aim for a formula for d on ${\displaystyle \Omega ^{*}(\mathbb {R} ^{n})}$.

Let's compute ${\displaystyle d(\sum _{I}f_{I}dx_{I})}$ where ${\displaystyle dx_{I}=dx_{i_{1}}\wedge \ldots \wedge dx_{i_{k}}}$ and ${\displaystyle I=(i_{1},\ldots ,i_{k})}$

Then, ${\displaystyle d(\sum _{I}f_{I}dx_{I})=\sum _{I}d(f_{I}\wedge dx_{i_{1}}\wedge \ldots \wedge dx_{i_{k}})=\sum _{I}(df_{I}\wedge dx_{I}+f_{I}\wedge d(dx_{I}))}$

The last term vanishes by (3) and (2) of the theorem, since ${\displaystyle d(dx_{i})=d(dx_{i})=0}$ for each factor. (This computation also proves uniqueness: it determines d on all forms from its values on functions.)

Now, as an aside, we claim that for ${\displaystyle f\in \Omega ^{0}(M),\ df=\sum _{j=1}^{n}{\frac {\partial f}{\partial x_{j}}}dx_{j}}$

Indeed, we know ${\displaystyle (df)({\frac {\partial }{\partial x_{i}}})={\frac {\partial }{\partial x_{i}}}f}$

However, ${\displaystyle (\sum _{j=1}^{n}{\frac {\partial f}{\partial x_{j}}}dx_{j})({\frac {\partial }{\partial x_{i}}})=\sum _{j}{\frac {\partial f}{\partial x_{j}}}\delta _{ij}={\frac {\partial f}{\partial x_{i}}}}$ which is the same.

Returning, we thus get ${\displaystyle d(\sum _{I}f_{I}dx_{I})=\sum _{I,j}{\frac {\partial f_{I}}{\partial x_{j}}}dx_{j}\wedge dx_{I}}$

Thus, specializing to ${\displaystyle n=3}$, our d takes functions to vector fields by ${\displaystyle f\mapsto ({\frac {\partial f}{\partial x_{1}}},{\frac {\partial f}{\partial x_{2}}},{\frac {\partial f}{\partial x_{3}}})}$

This is just the grad operator from calculus and we can see that the d operator appropriately takes things from ${\displaystyle \Omega ^{0}(M)}$ to ${\displaystyle \Omega ^{1}(M)}$.
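This identification can be checked with sympy; the sample function ${\displaystyle f=x_{1}^{2}x_{2}+x_{3}}$ is our own:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1**2 * x2 + x3          # a sample 0-form on R^3

# coefficients of df in the basis dx1, dx2, dx3 -- the gradient of f
df = [sp.diff(f, v) for v in (x1, x2, x3)]
print(df)   # [2*x1*x2, x1**2, 1]
```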

Now let us compute ${\displaystyle d(h_{1}dx_{2}\wedge dx_{3}+h_{2}dx_{3}\wedge dx_{1}+h_{3}dx_{1}\wedge dx_{2})={\frac {\partial h_{1}}{\partial x_{1}}}dx_{1}\wedge dx_{2}\wedge dx_{3}+{\frac {\partial h_{1}}{\partial x_{2}}}dx_{2}\wedge dx_{2}\wedge dx_{3}+{\frac {\partial h_{1}}{\partial x_{3}}}dx_{3}\wedge dx_{2}\wedge dx_{3}}$ + 6 more terms representing the 3 partials of each of the last 2 terms.

As each ${\displaystyle dx_{i}\wedge dx_{i}}$ term vanishes we are left with just,

${\displaystyle =({\frac {\partial h_{1}}{\partial x_{1}}}+{\frac {\partial h_{2}}{\partial x_{2}}}+{\frac {\partial h_{3}}{\partial x_{3}}})dx_{1}\wedge dx_{2}\wedge dx_{3}}$

I.e., d takes ${\displaystyle (h_{1},h_{2},h_{3})\mapsto \sum _{i}{\frac {\partial h_{i}}{\partial x_{i}}}}$

This is just the div operator from calculus; it appropriately takes vector fields to functions and represents the d from ${\displaystyle \Omega ^{2}(M)}$ to ${\displaystyle \Omega ^{3}(M)}$.
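The same computation in sympy, on a sample field of our own choosing:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
h = [x1 * x2, x2**2, x3 * x1]   # a sample (h1, h2, h3)

# coefficient of dx1 ^ dx2 ^ dx3 in d(h1 dx2^dx3 + h2 dx3^dx1 + h3 dx1^dx2)
div_h = sum(sp.diff(h[i], v) for i, v in enumerate((x1, x2, x3)))
print(div_h)   # x1 + 3*x2, the divergence of h
```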

We are left with computing d from ${\displaystyle \Omega ^{1}(M)}$ to ${\displaystyle \Omega ^{2}(M)}$

Computing, ${\displaystyle d(g_{1}dx_{1}+g_{2}dx_{2}+g_{3}dx_{3})=({\frac {\partial g_{3}}{\partial x_{2}}}-{\frac {\partial g_{2}}{\partial x_{3}}})dx_{2}\wedge dx_{3}+({\frac {\partial g_{1}}{\partial x_{3}}}-{\frac {\partial g_{3}}{\partial x_{1}}})dx_{3}\wedge dx_{1}+({\frac {\partial g_{2}}{\partial x_{1}}}-{\frac {\partial g_{1}}{\partial x_{2}}})dx_{1}\wedge dx_{2}}$

I.e., we just have the curl operator.
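For instance (our own sample field), the rotation field ${\displaystyle g=(-x_{2},x_{1},0)}$ has constant curl pointing along ${\displaystyle x_{3}}$:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
g = [-x2, x1, 0]   # a rotation field in the x1-x2 plane

# the three coefficients computed above: (dx2^dx3, dx3^dx1, dx1^dx2) components
curl_g = [sp.diff(g[2], x2) - sp.diff(g[1], x3),
          sp.diff(g[0], x3) - sp.diff(g[2], x1),
          sp.diff(g[1], x1) - sp.diff(g[0], x2)]
print(curl_g)   # [0, 0, 2]
```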

Note that the well known calculus laws that curl grad = 0 and div curl = 0 are just the expression that ${\displaystyle d^{2}=0}$.
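Both identities can be verified symbolically for arbitrary smooth functions; the helper names `grad`, `div`, `curl` are ours, and sympy's equality of mixed partials does the work:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = sp.Function('f')(x1, x2, x3)                                   # arbitrary 0-form
g = [sp.Function(name)(x1, x2, x3) for name in ('g1', 'g2', 'g3')] # arbitrary 1-form coefficients

def grad(h):
    return [sp.diff(h, v) for v in (x1, x2, x3)]

def div(F):
    return sum(sp.diff(F[i], v) for i, v in enumerate((x1, x2, x3)))

def curl(F):
    return [sp.diff(F[2], x2) - sp.diff(F[1], x3),
            sp.diff(F[0], x3) - sp.diff(F[2], x1),
            sp.diff(F[1], x1) - sp.diff(F[0], x2)]

print([sp.simplify(c) for c in curl(grad(f))])   # [0, 0, 0]: d^2 = 0 on Omega^0
print(sp.simplify(div(curl(g))))                 # 0: d^2 = 0 on Omega^1
```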

To provide some physical insight to the meanings of these operators:

1) The gradient points in the direction of steepest ascent. I.e., if you had a function on the plane, the graph would look like the surface of a mountain range, and the direction water would run is the negative of the gradient.

2) In, say, a compressible fluid, the divergence corresponds to the difference between the outflow and the inflow of fluid through a small epsilon box around a point.

3) The curl corresponds to the rotation vector for a ball. I.e., consider a ball (of the same density as the liquid around it) floating down a river. In the ${\displaystyle x_{1}}$-${\displaystyle x_{2}}$ plane, its tendency to rotate counterclockwise is given by ${\displaystyle {\frac {\partial g_{2}}{\partial x_{1}}}-{\frac {\partial g_{1}}{\partial x_{2}}}}$