0708-1300/Class notes for Thursday, September 27

Latest revision as of 14:35, 2 November 2007

Announcements go here

Class Notes

The notes below are by the students and for the students. Hopefully they are useful, but they come with no guarantee of any kind.

General comments regarding the wiki page

1) Use the history/recent changes to track your own work

2) Never post/upload without linking

Comments on Problem 4, page 71, Assignment 1

Dror gave three hints towards a solution to this problem:

1) Consider the analogy with a (smooth) car which must stop when approaching a sharp bend. When it does stop, everything around the car, such as a tree, stops moving relative to the car as well.

2) There is a map h going from the restriction of \mathbb{R}^{2} to our set into \mathbb{R}, as well as a map (f,g) going in reverse, satisfying h\circ(f,g)=I_{d}. We can then apply the chain rule (think about why!) to get h_{x}f' + h_{y}g' = 1. However, f=\pm g and both cases occur at adjacent points, resulting in f' =\pm g' at adjacent points and thus establishing the contradiction.

3) This hint uses methods from beyond page 71. It is possible to find two linearly independent directional derivatives on functions on our set A near zero. However, this is a contradiction, as a one-dimensional space cannot have a two-dimensional tangent space.
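The chain-rule identity in hint 2 can be sanity-checked numerically. Below is a minimal sketch using hypothetical functions h, f, g (chosen only for illustration; they are not the ones arising in Problem 4) that satisfy h\circ(f,g)=I_{d}, verifying h_{x}f' + h_{y}g' = 1 by finite differences.

```python
# Hypothetical example functions (not those from Problem 4),
# chosen so that h(f(t), g(t)) = t identically.
def f(t): return t * t
def g(t): return t - t * t
def h(x, y): return x + y

def d(fun, t, eps=1e-6):
    """Central-difference derivative of a one-variable function."""
    return (fun(t + eps) - fun(t - eps)) / (2.0 * eps)

def h_x(x, y, eps=1e-6): return (h(x + eps, y) - h(x - eps, y)) / (2.0 * eps)
def h_y(x, y, eps=1e-6): return (h(x, y + eps) - h(x, y - eps)) / (2.0 * eps)

t0 = 0.3
chain = h_x(f(t0), g(t0)) * d(f, t0) + h_y(f(t0), g(t0)) * d(g, t0)
print(abs(chain - 1.0) < 1e-8)  # h_x f' + h_y g' = 1
```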

At this point, the discussion returned to the previous day's class regarding the theorem on the equivalence of our two definitions of a tangent vector. It was reiterated that a major point in proving that the map between the two types of vectors is indeed a bijection was that it is possible, as a result of Hadamard's Lemma, to determine D by the n constants Dx_{i}.

It is easily checked that the tangent space T_{0}\mathbb{R}^{n} forms an n dimensional vector space. This is because each D is linear and is determined by the n constants Dx_{i}.

We wish to generalize this concept to show that T_{p}M^{n} is a vector space. This is easily done, as there is a canonical isomorphism between T_{p}M^{n} and T_{0}\mathbb{R}^{n} via the chart \varphi.

Proof of Hadamard's Lemma

f(p)-f(0)=\int_0^1 \frac{d}{dt}f(tp)\, dt =\int_0^1 \sum_{i=1}^{n} \frac{\partial f}{\partial x_{i}}(tp)x_{i}dt =\sum_{i=1}^{n}x_{i}\int_0^1\frac{\partial f}{\partial x_{i}}(tp)dt =\sum_{i=1}^{n}x_{i}g_i (p)

where g_i (p)=\int_0^1\frac{\partial f}{\partial x_{i}}(tp)dt

f is smooth with respect to p, and so g_i is as well, since derivatives with respect to p can be passed through the integral, which is with respect to t.



g_i(0)=\frac{\partial f}{\partial x_{i}}(0)
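The identity in the proof can be sanity-checked numerically. A minimal sketch, assuming an arbitrary sample smooth f on \mathbb{R}^2 with f(0)=0 (the particular f below is a hypothetical choice), approximating the integral defining g_i by a midpoint rule:

```python
import math

# A sample smooth test function on R^2 with f(0,0) = 0 (an arbitrary
# choice for illustration; any smooth f works after subtracting f(0)).
def f(x, y):
    return math.sin(x) + x * y + math.exp(y) - 1.0

def partial(fun, i, p, h=1e-6):
    """Central-difference approximation of (d fun / d x_i)(p)."""
    q_plus = list(p); q_plus[i] += h
    q_minus = list(p); q_minus[i] -= h
    return (fun(*q_plus) - fun(*q_minus)) / (2.0 * h)

def g(i, p, steps=1000):
    """g_i(p) = integral_0^1 (df/dx_i)(t p) dt, via the midpoint rule."""
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) / steps
        total += partial(f, i, [t * c for c in p])
    return total / steps

p = (0.3, -0.7)
lhs = f(*p) - f(0.0, 0.0)
rhs = sum(p[i] * g(i, p) for i in range(2))
print(abs(lhs - rhs) < 1e-5)   # the two sides of Hadamard's Lemma agree
print(abs(g(0, (0.0, 0.0)) - partial(f, 0, (0.0, 0.0))) < 1e-5)  # g_i(0) = (df/dx_i)(0)
```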

Local Coordinates

\mathbb{R}^n possesses canonical coordinate functions (x_1,...,x_n), whose level sets x_i = const form a grid.

Pulling these back to the manifold via the chart \varphi yields a similar 'grid' on the manifold, only now the grid lines are curves. Formally, we equip the manifold with the functions x^{o}_1 = x_1\circ\varphi~, x^{o}_2 = x_2\circ\varphi~, etc...

Now, \forall f:M\rightarrow \mathbb{R}\ \exists g:\mathbb{R}^n\rightarrow \mathbb{R} such that f=g\circ\varphi, and f(p) = g(\varphi(p)) = g(x_1(\varphi(p)),...,x_n(\varphi(p))) = g(x_1^{o}(p),...,x_n^{o}(p))

Conventionally the distinction between x and x^{o} is not made.
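As a concrete illustration of f=g\circ\varphi, here is a hypothetical example (all names and choices below are for illustration only): take M to be the open upper unit semicircle with chart \varphi(p) = x, so n = 1, and let f be the height function; then g = f\circ\varphi^{-1} and f = g\circ\varphi.

```python
import math

# Hypothetical concrete chart: M = open upper unit semicircle, and
# phi(p) = x is a chart M -> (-1, 1); here n = 1, so x^o_1 = phi itself.
def phi(p):
    return p[0]

def phi_inv(u):
    return (u, math.sqrt(1.0 - u * u))

# A smooth function on M: the height of a point.
def f(p):
    return p[1]

# Its local-coordinate expression g = f o phi^{-1}, so that f = g o phi.
def g(u):
    return math.sqrt(1.0 - u * u)

p = phi_inv(0.4)
print(abs(f(p) - g(phi(p))) < 1e-12)  # f and g o phi agree on M
```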

Question: How do you express D\in T_p(M) using the local coordinates?


1) \frac{\partial}{\partial x_i} is a tangent vector; \frac{\partial}{\partial x_i}(f):=\frac{\partial}{\partial x_i}(g) where g=f\circ\varphi^{-1}

2) D = \sum (Dx_i)\frac{\partial}{\partial x_i}


1) We need to check linearity and Leibniz's rule (easy)

2) We only need to check this on an arbitrary x_j as they span all such functions. So, \sum Dx_i \frac{\partial x_j}{\partial x_i} = \sum Dx_i \delta_{i,j} = Dx_j
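The expansion D = \sum (Dx_i)\frac{\partial}{\partial x_i} can be checked numerically for a concrete derivation. A sketch, assuming D is the directional derivative at 0 in \mathbb{R}^2 along a direction v (the direction and test function below are hypothetical choices):

```python
import math

# A concrete derivation at 0 in R^2: the directional derivative along a
# hypothetical direction v (chosen only for illustration).
v = (2.0, -1.0)

def D(fun, h=1e-6):
    """D(fun) = d/dt fun(t v) at t = 0, by a central difference."""
    return (fun(h * v[0], h * v[1]) - fun(-h * v[0], -h * v[1])) / (2.0 * h)

def partial(fun, i, h=1e-6):
    """(d fun / d x_i)(0) by a central difference."""
    p_plus = [0.0, 0.0]; p_plus[i] = h
    p_minus = [0.0, 0.0]; p_minus[i] = -h
    return (fun(*p_plus) - fun(*p_minus)) / (2.0 * h)

# The constants Dx_i are just the components of v ...
x1 = lambda x, y: x
x2 = lambda x, y: y
coeffs = (D(x1), D(x2))

# ... and D agrees with sum_i (Dx_i) d/dx_i on a sample smooth f.
f = lambda x, y: math.sin(x) * math.exp(y) + x * y
print(abs(D(f) - sum(coeffs[i] * partial(f, i) for i in range(2))) < 1e-6)
```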