0708-1300/Class notes for Tuesday, October 2


Class Notes

The notes below are by the students and for the students. Hopefully they are useful, but they come with no guarantee of any kind.


General class comments

1) The class photo is up, please add yourself

2) A questionnaire was passed out in class

3) Homework one is due on Thursday



First Hour

Today's Theme: Locally a function looks like its differential


Pushforward/Pullback


Let \theta:X\rightarrow Y be a smooth map.

We consider various objects, defined with respect to X or Y, and see in which direction it makes sense to consider corresponding objects on the other space. In general \theta_* will denote the pushforward, and \theta^* will denote the pullback.

1) points pushforward x\mapsto\theta_*(x) := \theta(x)

2) Paths \gamma:\mathbb{R}\rightarrow X, i.e., a bunch of points, pushforward via \gamma\rightarrow \theta_*(\gamma):=\theta\circ\gamma

3) Sets B\subset Y pullback via B\rightarrow \theta^*(B):=\theta^{-1}(B). Note that if one tried to pushforward sets A\subset X, the set operations complement and intersection would not commute appropriately with the map \theta: for instance, \theta(A_1\cap A_2) can be strictly smaller than \theta(A_1)\cap\theta(A_2) when \theta is not 1:1, and \theta(X\setminus A) need not equal Y\setminus\theta(A) when \theta is not onto.

4) Measures \mu pushforward via \mu\rightarrow (\theta_*\mu)(B) :=\mu(\theta^*B)

5) In some sense, we consider functions "dual" to points, and thus they should go in the opposite direction of points; namely, functions pullback via \theta^*f := f\circ\theta


6) Tangent vectors, defined in the sense of equivalence classes of paths [\gamma], pushforward as we would expect, since each path pushes forward: [\gamma]\rightarrow \theta_*[\gamma]:=[\theta_*\gamma] = [\theta\circ\gamma]

CHECK: This definition is well defined, that is, independent of the choice of representative \gamma (this follows from the chain rule, written in charts).


7) We can consider operators on functions to be in a sense dual to the functions and hence should go in the opposite direction. Hence, tangent vectors, defined in the sense of derivations, pushforward via D\rightarrow (\theta_*D)(f):= D(\theta^*f) = D(f\circ\theta)

CHECK: This definition satisfies linearity and the Leibniz property.
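For instance, the Leibniz property can be verified directly from the definition: since D is a derivation at p,

(\theta_*D)(fg) = D((fg)\circ\theta) = D((f\circ\theta)(g\circ\theta)) = f(\theta(p))\,D(g\circ\theta) + g(\theta(p))\,D(f\circ\theta) = f(\theta(p))\,(\theta_*D)(g) + g(\theta(p))\,(\theta_*D)(f)

so \theta_*D again satisfies the Leibniz property, now at the point \theta(p) (linearity is immediate, since f\mapsto f\circ\theta is linear in f).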


Theorem 1

The two definitions for the pushforward of a tangent vector coincide.

Proof:

Given a \gamma we can construct \theta_{*}\gamma as above. Moreover, from both \gamma and \theta_*\gamma we can construct the derivations D_{\gamma} and D_{\theta_*\gamma}, because we have previously shown that our two definitions of a tangent vector are equivalent. We can then pushforward D_{\gamma} to get \theta_*D_{\gamma}. The theorem is reduced to the claim that:

\theta_*D_{\gamma}f = D_{\theta_*\gamma}f

for functions f:Y\rightarrow \mathbb{R}

Now, D_{\theta_*\gamma}f = \frac{d}{dt}f\circ(\theta_*\gamma)|_{t=0} = \frac{d}{dt}f\circ(\theta\circ\gamma)|_{t=0} = \frac{d}{dt}(f\circ\theta)\circ\gamma |_{t=0} = D_{\gamma}(f\circ\theta) =\theta_*D_{\gamma}f

Q.E.D

Functoriality

Let \theta:X\rightarrow Y and \lambda:Y\rightarrow Z.

Consider some "object" s defined with respect to X and some "object u" defined with respect to Z. Something has the property of functorality if

\lambda_*(\theta_*s) = (\lambda\circ\theta)_*s

and

\theta^*(\lambda^*u) = (\lambda\circ\theta)^*u
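As a quick illustration, for functions f:Z\rightarrow \mathbb{R} the pullback satisfies

\theta^*(\lambda^*f) = \theta^*(f\circ\lambda) = f\circ\lambda\circ\theta = (\lambda\circ\theta)^*f

which is exactly the second property.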


Claim: All the classes of objects we considered previously have the functoriality property; in particular, the pushforward of tangent vectors does.


Let us consider \theta_* on T_pM given a \theta:M\rightarrow N

We can arrange for charts \varphi on a subset of M into \mathbb{R}^m (with coordinates denoted (x_1,\dots,x_m)) and \psi on a subset of N into \mathbb{R}^n (with coordinates denoted (y_1,\dots,y_n)) such that \varphi(p) = 0 and \psi(\theta(p))=0

Define \theta^o = \psi\circ\theta\circ\varphi^{-1} = (\theta_1(x_1,\dots,x_m),\dots,\theta_n(x_1,\dots,x_m))


Now, for a D\in T_pM we can write D=\sum_{i=1}^m a_i\frac{\partial}{\partial x_i}

So,

(\theta_*D)(f) = D(\theta^* f)
=\sum_{i=1}^m a_i\frac{\partial}{\partial x_i}(f\circ\theta)
=\sum_{i=1}^m a_i \sum_{j=1}^n\frac{\partial f}{\partial y_j}\frac{\partial\theta_j}{\partial x_i}
=\begin{bmatrix}
          \frac{\partial f}{\partial y_1} & \cdots & \frac{\partial f}{\partial y_n}\\
        \end{bmatrix}
        \begin{bmatrix}
          \frac{\partial \theta_1}{\partial x_1} & \cdots & \frac{\partial \theta_1}{\partial x_m}\\
          \vdots&  & \vdots\\
          \frac{\partial \theta_n}{\partial x_1} & \cdots & \frac{\partial \theta_n}{\partial x_m}\\
        \end{bmatrix}
        \begin{bmatrix}
          a_1\\
          \vdots\\
          a_m\\
        \end{bmatrix}


Now, we want to write \theta_*D = \sum b_j\frac{\partial}{\partial y_j}

and so, b_k = (\theta_*D)y_k =\begin{bmatrix}
        0&\cdots & 1 & \cdots &0\\
\end{bmatrix}
\begin{bmatrix}
\frac{\partial \theta_1}{\partial x_1} & \cdots & \frac{\partial \theta_1}{\partial x_m}\\
\vdots&  & \vdots\\
\frac{\partial \theta_n}{\partial x_1} & \cdots & \frac{\partial \theta_n}{\partial x_m}\\
\end{bmatrix}
\begin{bmatrix}
        a_1\\
\vdots\\
a_m\\
\end{bmatrix}

where the 1 is at the kth location. In other words, \theta_*D = \sum_{j=1}^{n} \sum_{i=1}^{m}a_i \frac{\partial \theta_j}{\partial x_i} \frac{\partial }{\partial y_j}
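As a concrete illustration (with an arbitrarily chosen map, not one from the lecture), take \theta:\mathbb{R}^2\rightarrow\mathbb{R}^3, \theta(x_1,x_2) = (x_1, x_2, x_1^2+x_2^2), with p the origin and the identity charts. The middle matrix above becomes

\begin{bmatrix} 1 & 0\\ 0 & 1\\ 2x_1 & 2x_2 \end{bmatrix}\Bigg|_{(0,0)} = \begin{bmatrix} 1 & 0\\ 0 & 1\\ 0 & 0 \end{bmatrix}

so D = a_1\frac{\partial}{\partial x_1} + a_2\frac{\partial}{\partial x_2} pushes forward to \theta_*D = a_1\frac{\partial}{\partial y_1} + a_2\frac{\partial}{\partial y_2} + 0\cdot\frac{\partial}{\partial y_3}, matching b_j = \sum_{i} a_i\frac{\partial\theta_j}{\partial x_i}.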


So, \theta_* = d\theta_p, i.e., \theta_* is the differential of \theta at p


We can check functoriality: since (\lambda\circ\theta)_* = \lambda_*\circ\theta_*, we get d(\lambda\circ\theta) = d\lambda\circ d\theta. This is just the chain rule.
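As a sanity check with arbitrarily chosen maps: let \theta:\mathbb{R}\rightarrow\mathbb{R}^2, \theta(t)=(t,t^2), and \lambda:\mathbb{R}^2\rightarrow\mathbb{R}, \lambda(x,y)=xy. Then (\lambda\circ\theta)(t)=t^3, so d(\lambda\circ\theta)_t = 3t^2, while

d\lambda_{\theta(t)}\circ d\theta_t = \begin{bmatrix} t^2 & t \end{bmatrix}\begin{bmatrix} 1\\ 2t \end{bmatrix} = t^2 + 2t^2 = 3t^2

as expected.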

Second Hour

Definition 1

An immersion is a (smooth) map \theta:M^m\rightarrow N^n such that \theta_* of tangent vectors is 1:1. More precisely, d\theta_p: T_pM\rightarrow T_{\theta(p)}N is 1:1 \forall p\in M


Example 1

Consider the canonical immersion for m<n, given by \iota:(x_1,...,x_m)\mapsto (x_1,...,x_m,0,...,0) with n-m zeros.


Example 2

This is the map from \mathbb{R} to \mathbb{R}^2 that looks like a loop-de-loop on a roller coaster (but squashed into the plane, of course!). The map \theta itself is NOT 1:1 (consider the crossover point); however, \theta_* IS 1:1, hence \theta is an immersion.
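A possible concrete model of such a curve (not necessarily the exact picture drawn in class) is \theta(t) = (t^2-1, t^3-t): here \theta(1) = \theta(-1) = (0,0), so \theta is not 1:1, but \theta'(t) = (2t, 3t^2-1) never vanishes, so d\theta_t is 1:1 for every t and \theta is an immersion.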


Example 3

Consider the map from \mathbb{R} to \mathbb{R}^2 that looks like a check mark. While this map itself is 1:1, \theta_* is NOT 1:1 (at the cusp in the check mark), and hence \theta is not an immersion.
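A standard concrete model in the same spirit (again, not necessarily the lecturer's picture) is the cusp \theta(t) = (t^2, t^3): this map is 1:1 (since t\mapsto t^3 is), but \theta'(0) = (0,0), so d\theta_0 is not 1:1 and \theta is not an immersion.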


Example 4

Can there be objects, such as the graph of |x|, that are NOT (the image of) an immersion but are constructed from a smooth function?

Consider the function \lambda(x) = e^{-1/x^2} for x>0 and 0 otherwise.

Then the map x\mapsto \begin{cases}
(\lambda(x),\lambda(x))& x>0\\
 (0,0)& x=0\\
 (-\lambda(-x),\lambda(-x)) & x<0\\
\end{cases}


is a smooth mapping with the graph of |x| as its image, but is NOT an immersion.
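To see this, recall that \lambda is smooth and all of its derivatives vanish at 0; hence both components of the piecewise formula above are smooth at x=0, and the derivative of the map at x=0 is (0,0). So the map is smooth, but its differential at 0 is not 1:1; the corner in the image is an obstruction to being an immersion, not to the smoothness of the map.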


Example 5

The torus, as a subset of \mathbb{R}^3, is (the image of) an immersion.


Now, consider a 1:1 linear map T:V\rightarrow W between vector spaces, which takes a basis (v_1,...,v_m) of V to the linearly independent vectors (Tv_1,...,Tv_m) = (w_1,...,w_m); these may be completed to a basis (w_1,...,w_m,w_{m+1},...,w_n) of W.

From linear algebra we know that we can choose such bases so that T is represented by a matrix with 1's along the first m diagonal locations and zeros elsewhere.
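In other words, in such bases T is represented by the n\times m block matrix

\begin{bmatrix} I_m \\ 0 \end{bmatrix}

which is exactly the matrix of the canonical immersion \iota above.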


Theorem 2

Locally, every immersion looks like the inclusion \iota:\mathbb{R}^m\rightarrow \mathbb{R}^n.

More precisely, if \theta:M^m\rightarrow N^n and d\theta_p is 1:1, then there exist charts \varphi acting on U\subset M and \psi acting on V\subset N, with p\in U and \varphi(p) = \psi(\theta(p)) = 0, such that the following diagram commutes:


\begin{matrix}
U&\rightarrow^{\varphi}&U'\subset \mathbb{R}^m\\
\downarrow_{\theta} &&\downarrow_{\iota} \\
V& \rightarrow^{\psi}& V'\subset \mathbb{R}^n\\
\end{matrix}


that is, \iota\circ\varphi = \psi\circ\theta


Definition 2

M is a submanifold of N if there exists a mapping \theta:M\rightarrow N such that \theta is a 1:1 immersion.

Example 6

In our previous example of the "loop-de-loop", the map is an immersion, but it is not 1:1 and hence its image is not a submanifold.


Example 7

The torus is a submanifold of \mathbb{R}^3, as the natural immersion into \mathbb{R}^3 is 1:1.


Definition 3

The map \theta:M\rightarrow N is an embedding if the subspace topology on \theta(M) (as a subset of N) coincides with the topology induced from the original topology of M; equivalently (for 1:1 \theta), \theta is a homeomorphism onto its image.


Example 8

Consider a map from \mathbb{R} to \mathbb{R}^2 whose image looks like an open interval whose two ends have been wrapped around until they just touch (they would intersect at one point if the interval were closed) the points 1/3 and 2/3 of the way along the interval, respectively. The map is both 1:1 and an immersion. However, any neighborhood about the endpoints of the interval will ALSO include points near the 1/3 and 2/3 spots on the line, i.e., the topology is different and hence this is not an embedding.

Corollary 1 to Theorem 2

The functional structure on an embedded manifold induced by the functional structure on the containing manifold is equal to its original functional structure.

Indeed, for all smooth f:M\rightarrow \mathbb{R} and for all p\in M there exists a neighborhood U of \theta(p) in N and a smooth g:N\rightarrow \mathbb{R} such that g|_{\theta(M)\cap U} = f|_{\theta(M)\cap U} (identifying M with its image \theta(M)).


Proof of Corollary 1

Loosely (and a sketch is most useful to see this!) we consider the embedded submanifold M in N and consider its image, under the appropriate charts, as a subset of \mathbb{R}^m\subset \mathbb{R}^n. We then consider some function defined on M, and hence on that subset of \mathbb{R}^n, which we can extend canonically as a constant function in the "vertical" directions. Now simply pull back via the chart on N to get the extended member of the functional structure on N.
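A minimal coordinate version of this sketch: using Theorem 2, choose charts with \psi\circ\theta\circ\varphi^{-1} = \iota, let \pi:\mathbb{R}^n\rightarrow\mathbb{R}^m be the projection onto the first m coordinates, and set g := f\circ\varphi^{-1}\circ\pi\circ\psi near \theta(p). Then for x near p, g(\theta(x)) = f(\varphi^{-1}(\pi(\psi(\theta(x))))) = f(\varphi^{-1}(\pi(\iota(\varphi(x))))) = f(\varphi^{-1}(\varphi(x))) = f(x), as required (one can then extend g to all of N, e.g. with a bump function, if a globally defined g is wanted).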


Proof of Theorem 2

We start with the usual situation of \theta:M\rightarrow N, with M, N manifolds with atlases containing charts (\varphi_0,U_0) and (\psi_0, V_0) respectively. We may also assume that for p\in U_0, \varphi_0(p) = \psi_0(\theta(p)) = 0. I will first draw the diagram and will subsequently justify the relevant parts. The proof reduces to showing a certain part of the diagram commutes appropriately.



\begin{matrix}
M\supset U_0 & \rightarrow^{\varphi_0} & U_1\subset \mathbb{R}^m & \rightarrow^{Id} & U_2 = U_1 \\
\downarrow_{\theta} & &\downarrow_{\theta_1} & &\downarrow_{\iota}\\
N\supset V_0 & \rightarrow^{\psi_0} &  V_1\subset \mathbb{R}^n & \leftarrow^{\xi} & V_2\\
\end{matrix}


It is very important to note that \varphi_0 and \psi_0 are NOT the charts we are looking for; they are merely charts that happen to be defined about the point p.

In the diagram above, \theta_1 = \psi_0\circ\theta\circ\varphi_0^{-1}. So \theta_1(0) = 0 and, by the linear algebra fact above (composing \psi_0 with a linear change of coordinates if necessary), (d\theta_1)_0 = \iota. Note that \theta_1, being merely the composition with the appropriate charts, does not fundamentally add anything. What makes this theorem work is the function \xi.

We consider the map \xi:V_2\rightarrow V_1 given by (x,y)\mapsto \theta_1(x) + (0,y), where points of \mathbb{R}^n are written as (x,y) with x\in\mathbb{R}^m and y\in\mathbb{R}^{n-m}. We note that \xi corresponds with the idea of "lifting" a flattened image back to its original height.


Claims:

1) \xi is invertible near zero. Indeed, computing, d\xi_0 = I, which is invertible as a matrix, and hence by the inverse function theorem \xi is invertible as a function near zero.

2) Take an x\in U_2. There are two routes to get to V_1, and computing along both routes yields the same result (see the computation below). Hence, the diagram commutes.
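In more detail: since d\xi_{(x,y)}(u,v) = (d\theta_1)_x(u) + (0,v), we get d\xi_0(u,v) = \iota(u) + (0,v) = (u,v), which is the claimed d\xi_0 = I. And for the second claim, going down-then-left from x\in U_2 gives \xi(\iota(x)) = \xi(x,0) = \theta_1(x) + (0,0) = \theta_1(x), while going left-then-down gives \theta_1(x) directly; the two agree, so the right-hand square commutes, as claimed.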


Hence, our immersion looks (locally) like the standard inclusion \iota between real spaces, and the charts we were looking for are the compositions from U_0 to U_2 (namely \varphi_0) and from V_0 to V_2 (namely \xi^{-1}\circ\psi_0).

Q.E.D