Add your name / see who's in!

# | Week of... | Links

Fall Semester

1 | Sep 10 | About, Tue, Thu
2 | Sep 17 | Tue, HW1, Thu
3 | Sep 24 | Tue, Photo, Thu
4 | Oct 1 | Questionnaire, Tue, HW2, Thu
5 | Oct 8 | Thanksgiving, Tue, Thu
6 | Oct 15 | Tue, HW3, Thu
7 | Oct 22 | Tue, Thu
8 | Oct 29 | Tue, HW4, Thu, Hilbert sphere
9 | Nov 5 | Tue, Thu, TE1
10 | Nov 12 | Tue, Thu
11 | Nov 19 | Tue, Thu, HW5
12 | Nov 26 | Tue, Thu
13 | Dec 3 | Tue, Thu, HW6

Spring Semester

14 | Jan 7 | Tue, Thu, HW7
15 | Jan 14 | Tue, Thu
16 | Jan 21 | Tue, Thu, HW8
17 | Jan 28 | Tue, Thu
18 | Feb 4 | Tue
19 | Feb 11 | TE2, Tue, HW9, Thu, Feb 17: last chance to drop class
R | Feb 18 | Reading week
20 | Feb 25 | Tue, Thu, HW10
21 | Mar 3 | Tue, Thu
22 | Mar 10 | Tue, Thu, HW11
23 | Mar 17 | Tue, Thu
24 | Mar 24 | Tue, HW12, Thu
25 | Mar 31 | Referendum, Tue, Thu
26 | Apr 7 | Tue, Thu
R | Apr 14 | Office hours
R | Apr 21 | Office hours
F | Apr 28 | Office hours, Final (Fri, May 2)

Register of Good Deeds

Errata to Bredon's Book


Announcements go here
Typed Notes
The notes below are by the students and for the students. Hopefully they are useful, but they come with no guarantee of any kind.
First Hour
We begin with a review of last class. Since no one has typed up the notes for last class yet, I will do the review here.
Recall we had an association $M\rightarrow \Omega ^{k}(M)$, the "$k$-forms on $M$", which equaled $\{\omega :\ p\in M\rightarrow A^{k}(T_{p}M)\}$
where
$A^{k}(V):=\{w:\underbrace {V\times \ldots \times V} _{k\ times}\rightarrow \mathbb {R} \}$
which is
1) Multilinear
2) Alternating
We had proved that:
1) $A^{k}(V)$ is a vector space
2) there was a wedge product $\wedge :A^{k}(V)\times A^{l}(V)\rightarrow A^{k+l}(V)$ via $\omega \wedge \lambda (v_{1},\ldots ,v_{k+l})={\frac {1}{k!l!}}\sum _{\sigma \in S_{k+l}}(-1)^{\sigma }\omega (v_{\sigma (1)},\ldots ,v_{\sigma (k)})\lambda (v_{\sigma (k+1)},\ldots ,v_{\sigma (k+l)})$
that is
a) bilinear
b) associative
c) supercommutative, i.e., $\omega \wedge \lambda =(-1)^{\deg(\omega )\deg(\lambda )}\lambda \wedge \omega$
From these definitions we can define for $\omega \in \Omega ^{k}(M)$ and $\lambda \in \Omega ^{l}(M)$ that $\omega \wedge \lambda \in \Omega ^{k+l}(M)$ with the same properties as above.
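The permutation-sum formula above is easy to check numerically. Below is a minimal sketch (all function names are mine, not from the notes) that implements $\wedge$ for multilinear alternating maps and exhibits supercommutativity for a pair of 1-forms on $\mathbb {R} ^{3}$:

```python
# Sketch: implement (omega ^ lam)(v_1,...,v_{k+l})
#   = 1/(k! l!) * sum_{sigma in S_{k+l}} (-1)^sigma omega(...) * lam(...)
# and check that dx_1 ^ dx_2 = -(dx_2 ^ dx_1), as (-1)^{1*1} predicts.
from itertools import permutations
from math import factorial

def sign(sigma):
    # (-1)^sigma, computed by counting inversions
    s = 1
    for i in range(len(sigma)):
        for j in range(i + 1, len(sigma)):
            if sigma[i] > sigma[j]:
                s = -s
    return s

def wedge(omega, k, lam, l):
    def result(*vs):
        total = 0.0
        for sigma in permutations(range(k + l)):
            total += (sign(sigma)
                      * omega(*(vs[i] for i in sigma[:k]))
                      * lam(*(vs[i] for i in sigma[k:])))
        return total / (factorial(k) * factorial(l))
    return result

# the coordinate 1-forms dx_1, dx_2, dx_3 on R^3
dx = [lambda v, i=i: v[i] for i in range(3)]

u, v = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
w12 = wedge(dx[0], 1, dx[1], 1)   # dx_1 ^ dx_2
w21 = wedge(dx[1], 1, dx[0], 1)   # dx_2 ^ dx_1
print(w12(u, v), w21(u, v))       # -3.0 3.0 — opposite signs
```

Note that $dx_{1}\wedge dx_{2}$ applied to $(u,v)$ is just the $2\times 2$ determinant $u_{1}v_{2}-u_{2}v_{1}$, which is what the sum over $S_{2}$ produces.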
Claim
If $\omega _{1},\ldots ,\omega _{n}$ is a basis of $A^{1}(V)=V^{*}$ then $\{\omega _{I}=\omega _{i_{1}}\wedge \ldots \wedge \omega _{i_{k}}\ :\ I=(i_{1},\ldots ,i_{k}){\text{ with }}1\leq i_{1}<\ldots <i_{k}\leq n\}$ is a basis of $A^{k}(V)$, and $\dim A^{k}(V)={\binom {n}{k}}$
If $\omega _{1},\ldots ,\omega _{n}\in \Omega ^{1}(M)$ and $\omega _{1}|_{p},\ldots ,\omega _{n}|_{p}$ is a basis of $(T_{p}M)^{*}$ for all $p\in M$, then any $\lambda \in \Omega ^{k}(M)$ can be written as $\lambda =\sum _{I}a_{I}(p)\omega _{I}$ where the $a_{I}:M\rightarrow \mathbb {R}$ are smooth.
The equivalence of these is left as an exercise.
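To make the count in the claim concrete, here is a quick sketch (the values $n=4$, $k=2$ are an arbitrary small example) enumerating the increasing multi-indices $I$ that label the basis:

```python
# Sketch: list the increasing multi-indices I = (i_1 < ... < i_k) that
# label the basis {omega_I} of A^k(V), and confirm there are n-choose-k.
from itertools import combinations
from math import comb

n, k = 4, 2   # arbitrary small example
basis_indices = list(combinations(range(1, n + 1), k))
print(basis_indices)   # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
assert len(basis_indices) == comb(n, k) == 6
```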
Example
Let us investigate $\Omega ^{*}(\mathbb {R} ^{3})$ (the * just means "anything").
Now, $(T_{p}(\mathbb {R} ^{3}))^{*}=\langle dx_{1},dx_{2},dx_{3}\rangle$ where $x_{i}:\mathbb {R} ^{3}\rightarrow \mathbb {R}$ and so $dx_{i}|_{p}:T_{p}\mathbb {R} ^{3}\rightarrow T_{x_{i}(p)}\mathbb {R} =\mathbb {R}$
Hence, $dx_{i}|_{p}\in (T_{p}\mathbb {R} ^{3})^{*}$
Now, $dx_{j}({\frac {\partial }{\partial x_{i}}})={\frac {\partial }{\partial x_{i}}}x_{j}=\delta _{ij}$ and hence we get a basis.
So, $\Omega ^{1}(\mathbb {R} ^{3})=\{g_{1}dx_{1}+g_{2}dx_{2}+g_{3}dx_{3}\}\approx \{g_{1},g_{2},g_{3}\}\approx$ {vector fields on $\mathbb {R} ^{3}$}
where the $g_{i}:\mathbb {R} ^{3}\rightarrow \mathbb {R}$ are smooth.
$\Omega ^{0}(\mathbb {R} ^{3})=\{f:\mathbb {R} ^{3}\rightarrow \mathbb {R} \}$
This is because to each point p we associate something that takes zero copies of the tangent space into the real numbers. Thus to each p we associate a number.
$\Omega ^{3}(\mathbb {R} ^{3})=\{kdx_{1}\wedge dx_{2}\wedge dx_{3}\}\approx$ {functions} where again the k is just a smooth function from $\mathbb {R} ^{3}$ to $\mathbb {R}$.
$\Omega ^{2}(\mathbb {R} ^{3})=\{h_{1}dx_{2}\wedge dx_{3}+h_{2}dx_{3}\wedge dx_{1}+h_{3}dx_{1}\wedge dx_{2}\}\approx \{h_{1},h_{2},h_{3}\}\approx$ {vector fields}
Aside
Recall our earlier discussion of how points and things like points (curves, equivalence classes of curves) push forward, while things dual to points (functions) pull back, and things dual to functions (such as derivations) push forward. See earlier for the precise definitions.
Now differential forms pull back, i.e., for $\phi :M\rightarrow N$ we get $\phi ^{*}:\Omega ^{k}(N)\rightarrow \Omega ^{k}(M)$, taking $\lambda \in \Omega ^{k}(N)$ to $\phi ^{*}(\lambda )\in \Omega ^{k}(M)$
via
$\phi ^{*}(\lambda )(v_{1},\ldots ,v_{k})=\lambda (\phi _{*}v_{1},\ldots ,\phi _{*}v_{k})$
The pullback preserves all the properties discussed above and is well defined. In particular, it is compatible with the wedge product via $\phi ^{*}(\omega \wedge \lambda )=\phi ^{*}(\omega )\wedge \phi ^{*}(\lambda )$
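The compatibility with the wedge product can be illustrated symbolically. The following sketch (the map $\phi$ and the forms $\omega$, $\lambda$ are my own toy choices, not from class) computes pullbacks of forms on $\mathbb {R} ^{2}$ from the Jacobian of $\phi$ and checks $\phi ^{*}(\omega \wedge \lambda )=\phi ^{*}(\omega )\wedge \phi ^{*}(\lambda )$:

```python
# Sketch: a 1-form a1 dx1 + a2 dx2 is a coefficient pair (a1, a2); a
# 2-form b dx1^dx2 is its single coefficient b.  Pullbacks along
# phi : R^2 -> R^2 are computed from the Jacobian J of phi.
import sympy as sp

u1, u2 = sp.symbols('u1 u2')          # coordinates on the source M = R^2
x1, x2 = sp.symbols('x1 x2')          # coordinates on the target N = R^2

phi = (u1**2 - u2, u1 * u2)           # a toy map phi : M -> N
J = sp.Matrix([[sp.diff(phi[i], var) for var in (u1, u2)]
               for i in range(2)])    # J[i, j] = d(phi_i)/d(u_j)

def pullback_1form(a):
    # phi^*(a1 dx1 + a2 dx2) = sum_j (a_j o phi) d(phi_j)
    a_phi = [ai.subs({x1: phi[0], x2: phi[1]}) for ai in a]
    return tuple(sp.expand(sum(a_phi[j] * J[j, i] for j in range(2)))
                 for i in range(2))

def pullback_2form(b):
    # phi^*(b dx1^dx2) = (b o phi) det(J) du1^du2
    return sp.expand(b.subs({x1: phi[0], x2: phi[1]}) * J.det())

omega = (x2, sp.Integer(0))           # omega = x2 dx1
lam = (sp.Integer(0), x1)             # lam   = x1 dx2

# for 1-forms a, b:  a ^ b = (a1*b2 - a2*b1) dx1^dx2
lhs = pullback_2form(omega[0] * lam[1] - omega[1] * lam[0])
po, pl = pullback_1form(omega), pullback_1form(lam)
rhs = sp.expand(po[0] * pl[1] - po[1] * pl[0])
assert sp.simplify(lhs - rhs) == 0    # phi^*(w ^ l) = phi^*(w) ^ phi^*(l)
```

The determinant of the Jacobian appearing in `pullback_2form` is exactly why pullbacks of top-degree forms reproduce the change-of-variables factor from multivariable calculus.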
Theorem-Definition
Given $M$, $\exists !$ linear map $d:\Omega ^{k}(M)\rightarrow \Omega ^{k+1}(M)$ satisfying:
1) If $f\in \Omega ^{0}(M)$ then $df(X)=X(f)$ for $X\in TM$
2) $d^{2}=0$. I.e. if $d_{k}:\Omega ^{k}\rightarrow \Omega ^{k+1}$ and $d_{k+1}:\Omega ^{k+1}\rightarrow \Omega ^{k+2}$ then $d_{k+1}\circ d_{k}=0$.
3) $d(\omega \wedge \lambda )=d\omega \wedge \lambda +(-1)^{\deg \omega }\omega \wedge d\lambda$
Second Hour
Some notes about the above definition:
1) When we restrict our d to functions we just get the old meaning for d.
2) Philosophically, there is a duality between differential forms and manifolds and that duality is given by integration. In this duality, d is the adjoint of the boundary operator on manifolds. For manifolds, the boundary of the boundary is empty and hence it is reasonable that $d^{2}=0$ on differential forms.
3) To remember the formula in (3) above and others like it, it helps to keep in mind which objects are "odd" and which are "even"; when commuting such operators, the signs come out as you would expect from multiplying odd and even objects.
Example
Let us aim for a formula for d on $\Omega ^{*}(\mathbb {R} ^{n})$.
Let's compute $d(\sum _{I}f_{I}dx_{I})$ where $dx_{I}=dx_{i_{1}}\wedge \ldots \wedge dx_{i_{k}}$ and $I=(i_{1},\ldots ,i_{k})$
Then, $d(\sum _{I}f_{I}dx_{I})=\sum _{I}d(f_{I}\wedge dx_{i_{1}}\wedge \ldots \wedge dx_{i_{k}})=\sum _{I}\left(df_{I}\wedge dx_{I}+f_{I}\wedge d(dx_{I})\right)$
The last term vanishes because of (2) and (3) in the theorem: each $d(dx_{I})$ expands into terms containing a $d(dx_{i_{j}})=0$. Note that this computation proves uniqueness!
Now, as an aside, we claim that for $f\in \Omega ^{0}(M),\ df=\sum _{j=1}^{n}{\frac {\partial f}{\partial x_{j}}}dx_{j}$
Indeed, we know $(df)({\frac {\partial }{\partial x_{i}}})={\frac {\partial }{\partial x_{i}}}f$
However, $(\sum _{j=1}^{n}{\frac {\partial f}{\partial x_{j}}}dx_{j})({\frac {\partial }{\partial x_{i}}})=\sum _{j}{\frac {\partial f}{\partial x_{j}}}\delta _{ij}={\frac {\partial f}{\partial x_{i}}}$ which is the same.
Returning, we thus get $d(\sum _{I}f_{I}dx_{I})=\sum _{I,j}{\frac {\partial f_{I}}{\partial x_{j}}}dx_{j}\wedge dx_{I}$
Thus our d takes functions to vector fields by $f\mapsto ({\frac {\partial f}{\partial x_{1}}},{\frac {\partial f}{\partial x_{2}}},{\frac {\partial f}{\partial x_{3}}})$
This is just the grad operator from calculus and we can see that the d operator appropriately takes things from $\Omega ^{0}(M)$ to $\Omega ^{1}(M)$.
Now let us compute $d(h_{1}dx_{2}\wedge dx_{3}+h_{2}dx_{3}\wedge dx_{1}+h_{3}dx_{1}\wedge dx_{2})={\frac {\partial h_{1}}{\partial x_{1}}}dx_{1}\wedge dx_{2}\wedge dx_{3}+{\frac {\partial h_{1}}{\partial x_{2}}}dx_{2}\wedge dx_{2}\wedge dx_{3}+{\frac {\partial h_{1}}{\partial x_{3}}}dx_{3}\wedge dx_{2}\wedge dx_{3}$ + 6 more terms representing the 3 partials of each of the last 2 terms.
As each $dx_{i}\wedge dx_{i}$ term vanishes we are left with just,
$=({\frac {\partial h_{1}}{\partial x_{1}}}+{\frac {\partial h_{2}}{\partial x_{2}}}+{\frac {\partial h_{3}}{\partial x_{3}}})dx_{1}\wedge dx_{2}\wedge dx_{3}$
I.e., d takes $(h_{1},h_{2},h_{3})\mapsto \sum _{i}{\frac {\partial h_{i}}{\partial x_{i}}}$
This is just the div operator from calculus; it appropriately takes vector fields to functions and represents the $d$ from $\Omega ^{2}(M)$ to $\Omega ^{3}(M)$.
We are left with computing d from $\Omega ^{1}(M)$ to $\Omega ^{2}(M)$
Computing, $d(g_{1}dx_{1}+g_{2}dx_{2}+g_{3}dx_{3})=({\frac {\partial g_{3}}{\partial x_{2}}}-{\frac {\partial g_{2}}{\partial x_{3}}})dx_{2}\wedge dx_{3}+({\frac {\partial g_{1}}{\partial x_{3}}}-{\frac {\partial g_{3}}{\partial x_{1}}})dx_{3}\wedge dx_{1}+({\frac {\partial g_{2}}{\partial x_{1}}}-{\frac {\partial g_{1}}{\partial x_{2}}})dx_{1}\wedge dx_{2}$
I.e., we just have the curl operator.
Note that the well known calculus laws that curl grad = 0 and div curl = 0 are just the expression that $d^{2}=0$.
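Both identities can be verified symbolically. The sketch below (the test function $f$ and vector field $G$ are arbitrary choices of mine) writes $d$ on $\Omega ^{*}(\mathbb {R} ^{3})$ in coordinates as grad, curl, and div, and checks that both compositions vanish:

```python
# Sketch: d on Omega^*(R^3) in coordinates as grad / curl / div, and a
# symbolic check that d^2 = 0, i.e. curl(grad f) = 0 and div(curl G) = 0.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)

def grad(f):                      # d : Omega^0 -> Omega^1
    return tuple(sp.diff(f, xi) for xi in X)

def curl(g):                      # d : Omega^1 -> Omega^2
    return (sp.diff(g[2], x2) - sp.diff(g[1], x3),
            sp.diff(g[0], x3) - sp.diff(g[2], x1),
            sp.diff(g[1], x1) - sp.diff(g[0], x2))

def div(h):                       # d : Omega^2 -> Omega^3
    return sp.diff(h[0], x1) + sp.diff(h[1], x2) + sp.diff(h[2], x3)

f = sp.sin(x1 * x2) + sp.exp(x3) * x1        # an arbitrary test function
G = (x1 * x2 * x3, sp.cos(x2), x1**2 + x3)   # an arbitrary vector field

assert all(sp.simplify(c) == 0 for c in curl(grad(f)))   # curl grad = 0
assert sp.simplify(div(curl(G))) == 0                    # div curl = 0
```

Both checks come down to the equality of mixed partial derivatives, which is also the reason $d^{2}=0$ holds in the first place.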
To provide some physical insight into the meanings of these operators:
1) The gradient points in the direction of maximum ascent, so its negative is the direction of steepest descent. E.g., if you had a function on the plane, its graph would look like the surface of a mountain range, and the direction water would run is minus the gradient.
2) In, say, a compressible fluid, the divergence corresponds to the difference between the outflow and the inflow of fluid through a small $\epsilon$-box around a point.
3) The curl corresponds to the rotation vector of a ball. I.e., consider a ball (of equal density to the liquid around it) floating down a river. In the $x_{2}$, $x_{1}$ plane, its tendency to rotate clockwise would be given by ${\frac {\partial g_{2}}{\partial x_{1}}}-{\frac {\partial g_{1}}{\partial x_{2}}}$