# 0708-1300/Class notes for Thursday, September 27


## Class Notes

The notes below are by the students and for the students. Hopefully they are useful, but they come with no guarantee of any kind.

General comments regarding the wiki page

1) Use the history/recent changes to track your own work

Comments on Problem 4, page 71, Assignment 1

Dror gave three hints towards a solution to this problem:

1) Consider the analogy with a (smooth) car which must stop when approaching a sharp bend. When it does stop, everything around the car, such as a tree, also stops moving relative to the car.

2) There is a map h going from the restriction of ${\displaystyle \mathbb {R} ^{2}}$ to our set into ${\displaystyle \mathbb {R} }$, as well as a map (f,g) going in reverse, satisfying ${\displaystyle h\circ (f,g)=\mathrm {Id} }$. We can then apply the chain rule (think about why!) to get ${\displaystyle h_{x}f'+h_{y}g'=1}$. However, ${\displaystyle f=\pm g}$, and both cases occur at points arbitrarily close to each other, resulting in ${\displaystyle f'=\pm g'}$ at such points and thus establishing the contradiction.

3) This hint uses methods from beyond page 71. It is possible to find two linearly independent directional derivatives of functions on our set A near zero. This is a contradiction, as a one-dimensional space cannot have a two-dimensional tangent space.
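To spell out the chain-rule step in hint 2 (a sketch; the details are ours to fill in): differentiating ${\displaystyle h\circ (f,g)=\mathrm {Id} }$ at ${\displaystyle 0}$ gives ${\displaystyle h_{x}f'(0)+h_{y}g'(0)=1}$. Since ${\displaystyle f=g}$ on one side of 0 and ${\displaystyle f=-g}$ on the other, taking one-sided derivatives forces both ${\displaystyle f'(0)=g'(0)}$ and ${\displaystyle f'(0)=-g'(0)}$, hence ${\displaystyle f'(0)=g'(0)=0}$, contradicting ${\displaystyle h_{x}f'(0)+h_{y}g'(0)=1}$.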

At this point, the discussion returned to the previous day's class and the theorem on the equivalence of our two definitions of a tangent vector. It was reiterated that a major point in proving that the map between the two types of vectors is indeed onto is that, as a result of Hadamard's Lemma, D is determined by the n constants ${\displaystyle Dx_{i}}$.

It is easily checked that the tangent space ${\displaystyle T_{0}\mathbb {R} ^{n}}$ forms an n-dimensional vector space. This is because the D's are linear and because each D is determined by the n constants ${\displaystyle Dx_{i}}$.

We wish to generalize this concept to show that ${\displaystyle T_{p}M^{n}}$ is a vector space. This is easily done, as there is a canonical isomorphism between ${\displaystyle T_{p}M^{n}}$ and ${\displaystyle T_{0}\mathbb {R} ^{n}}$ via the chart ${\displaystyle \varphi }$.
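Concretely (a sketch of what this isomorphism looks like, not spelled out in class): with ${\displaystyle \varphi (p)=0}$, the chart identifies the two tangent spaces by ${\displaystyle D\mapsto \varphi _{*}D}$, where ${\displaystyle (\varphi _{*}D)(g):=D(g\circ \varphi )}$ for ${\displaystyle g:\mathbb {R} ^{n}\rightarrow \mathbb {R} }$.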

Recall Hadamard's Lemma: every smooth f near 0 in ${\displaystyle \mathbb {R} ^{n}}$ can be written as ${\displaystyle f(p)=f(0)+\sum x_{i}g_{i}(p)}$ with the ${\displaystyle g_{i}}$ smooth. Indeed,

${\displaystyle f(p)-f(0)=\int _{0}^{1}{\frac {d}{dt}}f(tp)\,dt=\int _{0}^{1}\sum _{i=1}^{n}{\frac {\partial f}{\partial x_{i}}}(tp)\,x_{i}\,dt=\sum _{i=1}^{n}x_{i}\int _{0}^{1}{\frac {\partial f}{\partial x_{i}}}(tp)\,dt=\sum _{i=1}^{n}x_{i}g_{i}(p)}$

where ${\displaystyle g_{i}(p)=\int _{0}^{1}{\frac {\partial f}{\partial x_{i}}}(tp)dt}$

Since f is smooth in p, each ${\displaystyle g_{i}}$ is smooth as well: derivatives with respect to p may be passed under the integral, which is taken with respect to t.

QED

Corollary:

${\displaystyle g_{i}(0)=\int _{0}^{1}{\frac {\partial f}{\partial x_{i}}}(0)\,dt={\frac {\partial f}{\partial x_{i}}}(0)}$
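As a sanity check (an illustration, not from class), the lemma and its corollary can be verified symbolically for a concrete polynomial f, computing the ${\displaystyle g_{i}}$ by the same integral formula:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# A concrete smooth (polynomial) function on R^2 -- a hypothetical example
f = x**2*y + x**3 + y**2

# g_i(p) = integral_0^1 (df/dx_i)(t*p) dt, exactly as in the proof above
g1 = sp.integrate(sp.diff(f, x).subs({x: t*x, y: t*y}), (t, 0, 1))
g2 = sp.integrate(sp.diff(f, y).subs({x: t*x, y: t*y}), (t, 0, 1))

# Hadamard's Lemma: f(p) - f(0) = x*g1(p) + y*g2(p)
assert sp.simplify((f - f.subs({x: 0, y: 0})) - (x*g1 + y*g2)) == 0

# Corollary: g_i(0) = (df/dx_i)(0)
assert g1.subs({x: 0, y: 0}) == sp.diff(f, x).subs({x: 0, y: 0})
assert g2.subs({x: 0, y: 0}) == sp.diff(f, y).subs({x: 0, y: 0})
```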

## Local Coordinates

${\displaystyle \mathbb {R} ^{n}}$ possesses canonical coordinate functions ${\displaystyle (x_{1},\ldots ,x_{n})}$, whose level sets ${\displaystyle x_{i}={\text{const}}}$ form the familiar grid of lines.

The images of these lines under ${\displaystyle \varphi ^{-1}}$ yield a similar 'grid' on the manifold, only now the lines are curves. Formally, we equip the manifold with functions ${\displaystyle x_{1}^{o}=x_{1}\circ \varphi ~}$, ${\displaystyle x_{2}^{o}=x_{2}\circ \varphi ~}$, etc.

Now, ${\displaystyle \forall f:M\rightarrow \mathbb {R} \ \exists g:\mathbb {R} ^{n}\rightarrow \mathbb {R} }$ such that ${\displaystyle f=g\circ \varphi }$, and ${\displaystyle f(p)=g(\varphi (p))=g(x_{1}(\varphi (p)),\ldots ,x_{n}(\varphi (p)))=g(x_{1}^{o}(p),\ldots ,x_{n}^{o}(p))}$.

Conventionally the distinction between ${\displaystyle x_{i}}$ and ${\displaystyle x_{i}^{o}}$ is not made.
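A concrete example (hypothetical, not from class): take M to be the upper half of the unit circle, ${\displaystyle M=\{(\cos \theta ,\sin \theta ):0<\theta <\pi \}}$, with chart ${\displaystyle \varphi (\cos \theta ,\sin \theta )=\theta }$. The single local coordinate is ${\displaystyle x_{1}^{o}=x_{1}\circ \varphi =\theta }$, and the height function ${\displaystyle f(p)=\sin \theta }$ is ${\displaystyle f=g\circ \varphi }$ with ${\displaystyle g(x_{1})=\sin x_{1}}$.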

Question: How do you express ${\displaystyle D\in T_{p}(M)}$ using the local coordinates?

Claim

1) ${\displaystyle {\frac {\partial }{\partial x_{i}}}}$ is a tangent vector; ${\displaystyle {\frac {\partial }{\partial x_{i}}}(f):={\frac {\partial }{\partial x_{i}}}(g)}$ where ${\displaystyle g=f\circ \varphi ^{-1}}$

2) ${\displaystyle D=\sum _{i}(Dx_{i}){\frac {\partial }{\partial x_{i}}}}$

Proof

1) We need to check linearity and Leibniz's rule (easy)

2) We only need to check the equality on the coordinate functions ${\displaystyle x_{j}}$, since (by the Hadamard argument above) a tangent vector is determined by its values on them. So, ${\displaystyle \sum _{i}(Dx_{i}){\frac {\partial x_{j}}{\partial x_{i}}}=\sum _{i}(Dx_{i})\delta _{i,j}=Dx_{j}}$
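As a check of claim 2 (an illustration, not from class): take D to be the derivative along a straight line through 0 in ${\displaystyle \mathbb {R} ^{2}}$, and verify ${\displaystyle Df=\sum (Dx_{i}){\frac {\partial f}{\partial x_{i}}}(0)}$ on a sample function with sympy:

```python
import sympy as sp

x1, x2, t, a, b = sp.symbols('x1 x2 t a b')

# A tangent vector at 0: the derivative along the curve gamma(t) = (a*t, b*t)
def D(f):
    return sp.diff(f.subs({x1: a*t, x2: b*t}), t).subs(t, 0)

# A sample smooth function (hypothetical example)
f = x1**2 * x2 + 3*x1 - 2*x2

# The coefficients D(x_i) -- here a and b
c1, c2 = D(x1), D(x2)

# Claim 2: D(f) = sum_i D(x_i) * (df/dx_i)(0)
expected = (c1 * sp.diff(f, x1).subs({x1: 0, x2: 0})
            + c2 * sp.diff(f, x2).subs({x1: 0, x2: 0}))
assert sp.simplify(D(f) - expected) == 0
```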