0708-1300/Class notes for Tuesday, November 6
<span style="color: red;">The notes below are by the students and for the students. Hopefully they are useful, but they come with no guarantee of any kind.</span>


<p>We will now shift our attention to the theory of integration on smooth manifolds. The first thing that we need to construct is a means of measuring volumes on manifolds. To accomplish this goal, we begin by imagining that we want to measure the volume of the "infinitesimal" parallelepiped [http://en.wikipedia.org/wiki/Parallelepiped] defined by a set of vectors <math>X_1 , \ldots ,X_k \in T_pM\!</math> by feeding these vectors into some function <math>\omega : (T_pM)^k \to \mathbb{R}\!</math>. We would like <math>\omega\!</math> to satisfy a few properties:</p>


<ol>
<li> <math>\omega\!</math> should be linear in each argument: for example, if we double the length of one of the sides, the volume should double.
<li> If two of the vectors fed to <math>\omega\!</math> are parallel, the volume assigned by <math>\omega\!</math> should be zero, because the parallelepiped collapses to something with lower dimension in this case.
</ol>

<p>Inspired by these requirements, we make the following definition:</p>


===Definition===
<i>

<p>Let <math>V\!</math> be a real vector space, let <math>p \in \mathbb{N}\!</math> and let <math>L(V^p; \mathbb{R})</math> denote the collection of maps from <math>V^p\!</math> to <math>\mathbb{R}</math> that are linear in each argument separately. We set </p>

<div align="center"><math>A^p(V) = \left\{ \omega \in L(V^p; \mathbb{R}) : \omega(\ldots,v,\ldots,v,\ldots) = 0\ \forall v \in V \right\}</math></div>

<p> and if <math>\omega \in A^p(V)\!</math>, we say that the <b>degree of <math>\omega\!</math></b> is <math>p\!</math> and write <math>\mathrm{deg}(\omega) = p\!</math>. <math>\Box\!</math></p>

</i>
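
<p>A familiar example of such a map is the determinant: if <math>V = \mathbb{R}^n\!</math>, then <math>\det\!</math>, viewed as a function of the <math>n\!</math> columns of an <math>n \times n\!</math> matrix, is linear in each column and vanishes whenever two columns coincide, so <math>\det \in A^n(\mathbb{R}^n)\!</math>; it computes exactly the signed volume of the parallelepiped spanned by its arguments.</p>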


===Proposition===

<i>


<p>Suppose that <math>\omega \in A^p(V)\!</math> and <math>v_1,\ldots,v_p \in V\!</math>. The following statements hold:</p>
<ol>
<li> <math>A^p(V)\!</math> has a natural vector space structure
<li> <math>A^0(V)\!</math> is <math>\mathbb{R}\!</math>
<li> <math>A^1(V)\!</math> is the dual space of <math>V\!</math>
<li> <math>\omega(v_1, \ldots, v_k , \ldots, v_j, \ldots, v_p) = -\omega(v_1, \ldots, v_j , \ldots, v_k, \ldots, v_p)</math> for every <math>1 \le j < k \le p\!</math>
<li> If <math>\sigma \in S_p\!</math> is a permutation, then <math>\omega(v_{\sigma(1)},\ldots,v_{\sigma(p)}) = (-1)^\sigma \omega(v_1,\ldots,v_p)</math>
</ol>

</i>


====Proof====
<p>The first statement is easy to show and is left as an exercise. The second statement is more of a convenient definition: note that <math>A^0(V)\!</math> consists of all maps that take no vectors and return a real number, since the other properties are vacuous when the domain is empty. We can thus interpret an element of this space simply as a real number. The third statement is clear, as the definitions of <math>A^1(V)\!</math> and <math>V^*\!</math> coincide.</p>

<p>As for the fourth, note that <math>\omega(v_1, \ldots, v_j + v_k , \ldots, v_j + v_k, \ldots, v_p) = 0</math>, so that using linearity we obtain</p>

<div align="center"><math>0 = \omega(v_1, \ldots, v_j , \ldots, v_j, \ldots, v_p) + \omega(v_1, \ldots, v_j , \ldots, v_k, \ldots, v_p) + \omega(v_1, \ldots, v_k , \ldots, v_j, \ldots, v_p) + \omega(v_1, \ldots, v_k , \ldots, v_k, \ldots, v_p),</math></div>

<p>in which the first and last terms vanish by the definition of <math>A^p(V)\!</math>,</p>
<p> and hence <math>\omega(v_1, \ldots, v_k , \ldots, v_j, \ldots, v_p) + \omega(v_1, \ldots, v_j , \ldots, v_k, \ldots, v_p) = 0</math>.</p>


<p> The fifth statement then follows from repeated application of the fourth. <math>\Box\!</math></p>


<br>



<p>Our computation in the previous proof shows that we could equally well have defined <math>A^p(V)\!</math> to consist of all those multilinear maps from <math>V^k\!</math> to <math>\mathbb{R}\!</math> that change sign when two arguments are interchanged.</p>
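
<p>Indeed, the converse implication is immediate: if <math>\omega\!</math> changes sign whenever two of its arguments are interchanged, then interchanging the two equal arguments in <math>\omega(\ldots,v,\ldots,v,\ldots)\!</math> gives <math>\omega(\ldots,v,\ldots,v,\ldots) = -\omega(\ldots,v,\ldots,v,\ldots)\!</math>, so this quantity must be zero.</p>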
<p>One of the nicest things about these spaces is that we can define a sort of multiplication of elements of <math>A^p(V)\!</math> with elements of <math>A^q(V)\!</math>. This multiplication is called the wedge product and is defined as follows.</p>


===Definition===

<i>


<p>For each <math>p,q \in \mathbb{N}\!</math> the <b>wedge product</b> is the map <math>\wedge : A^p(V) \times A^q(V) \to A^{p+q}(V), (\omega,\lambda) \mapsto \omega \wedge \lambda</math> defined by</p>


<div align="center"><math>(\omega \wedge \lambda) (v_1,\ldots,v_{p+q}) = \sum_{\sigma \in S_{p,q}} (-1)^\sigma\omega(v_{\sigma(1)},\ldots,v_{\sigma(p)})\lambda(v_{\sigma(p+1)},\ldots,v_{\sigma(p+q)})</math> </div>

<p> for every <math>v_1 ,\ldots,v_{p+q} \in V</math>, where <math>S_{p,q} = \{ \sigma \in S_{p+q} | \sigma(1) < \ldots < \sigma(p), \sigma(p+1) < \ldots < \sigma(p+q)\}</math>.<math>\Box</math>
</p>
</i>

<br>

<p>The idea behind this definition is to feed vectors to <math>\omega \wedge \lambda\!</math> in as many ways as possible. We could equally well have set </p>

<div align="center"> <math>(\omega \wedge \lambda) (v_1,\ldots,v_{p+q}) = \frac{1}{p!q!} \sum_{\sigma \in S_{p+q}} (-1)^\sigma\omega(v_{\sigma(1)},\ldots,v_{\sigma(p)})\lambda(v_{\sigma(p+1)},\ldots,v_{\sigma(p+q)})</math>. </div>

<p>The factor of <math>\frac{1}{p!q!}</math> compensates for the overcounting that we do by summing over all of <math>S_{p+q}\!</math>: each choice of which <math>p\!</math> vectors are fed to <math>\omega\!</math> occurs <math>p!\!</math> times in that sum, once for each ordering of those vectors, whereas it occurs only once in the sum over <math>S_{p,q}\!</math>, and all <math>p!\!</math> of these terms are equal because the change in <math>(-1)^\sigma\!</math> is cancelled by the skew-symmetry of <math>\omega\!</math>. The same argument accounts for the <math>q!\!</math>.</p>

<p>Of course, as we have defined it, it is not immediately clear that <math>\omega \wedge \lambda\in A^{p+q}(V)\!</math>. However, multilinearity is obvious and it is fairly clear that the <math>(-1)^\sigma\!</math> takes care of the skew-symmetry.</p>
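
<p>As a simple check of these formulas, take <math>p = q = 1\!</math> and <math>\omega, \lambda \in A^1(V) = V^*\!</math>. Then <math>S_{1,1} = S_2\!</math> and both definitions give</p>

<div align="center"><math>(\omega \wedge \lambda)(v_1, v_2) = \omega(v_1)\lambda(v_2) - \omega(v_2)\lambda(v_1),</math></div>

<p>which is visibly bilinear and skew-symmetric, hence an element of <math>A^2(V)\!</math>.</p>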

<p>In fact, <math>\wedge\!</math> has a number of nice properties:</p>

===Proposition===

<i>

<p>The following statements hold:</p>

<ol>
<li> <math>\wedge\!</math> is a bilinear map.
<li> <math>\wedge\!</math> is associative.
<li> <math>\wedge\!</math> is <b>supercommutative</b>: <math>\omega \wedge \lambda = (-1)^{\mathrm{deg}(\omega)\mathrm{deg}(\lambda)} \lambda \wedge \omega\!</math>.
</ol>

</i>

====Proof====

<p> Bilinearity is clear. Associativity and supercommutativity follow from some combinatorial arguments. <math>\Box\!</math> </p>
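
<p>For example, any two <math>1\!</math>-forms anticommute: if <math>\mathrm{deg}(\omega) = \mathrm{deg}(\lambda) = 1\!</math> then <math>\omega \wedge \lambda = -\lambda \wedge \omega\!</math>, and in particular <math>\omega \wedge \omega = 0\!</math> whenever <math>\mathrm{deg}(\omega)\!</math> is odd. Forms of even degree, on the other hand, commute with all other forms.</p>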

<br>

<p> It turns out that we can use the wedge product to find bases for <math>A^p(V)\!</math>:</p>
===Proposition===

<i>

<p> If <math>\{\omega_1,\ldots,\omega_n \}\subset V^* \!</math> is a basis for <math>V^*\!</math> then <math>\{\omega_{i_1}\wedge\cdots\wedge\omega_{i_p} \in A^p(V) | i_1 < \ldots < i_p \}\!</math> is a basis for <math>A^p(V)\!</math>.</p>

</i>

====Proof====

<p>Let <math>\{v_1,\ldots,v_n \}\subset V \!</math> be the dual basis to <math>\{\omega_1,\ldots,\omega_n \}</math>, so that <math>\omega_i(v_j) = \delta_{ij}\!</math>. Let <math>\rho_p = \{(i_1,\ldots,i_p) \in \mathbb{N}^p | i_1 < \ldots < i_p\}\!</math>. For <math>I,J \in \rho_p\!</math> with <math>I = (i_1,\ldots,i_p)\!</math> and <math>J= (j_1,\ldots,j_p)\!</math>, let <math>\omega_I = \omega_{i_1} \wedge \cdots \wedge \omega_{i_p}</math>, and let <math>v_J = (v_{j_1},\ldots,v_{j_p})</math>. Then <math>\omega_I(v_J) = 1\!</math> if <math>I=J\!</math> and <math>\omega_I(v_J) = 0\!</math> otherwise.</p>

<p>We claim that if <math>\omega \in A^p(V)\!</math>, then <math>\omega = \sum_{I\in\rho_p} \omega(v_I) \omega_I\!</math>. But <math>\sum_{I\in\rho_p} \omega(v_I) \omega_I(v_J) = \sum_{I\in\rho_p} \omega(v_I) \delta_{IJ} = \omega(v_J)\!</math>, so equality holds for ordered sequences of basis vectors. Equality then holds for any sequence of vectors by skew-symmetry and linearity. We claim further that the <math>\omega_I\!</math> are linearly independent. But if <math>0 = \sum_{I \in \rho_p} \alpha_I \omega_I\!</math>, then <math>\alpha_J = 0 \! </math> by applying <math>\sum_{I \in \rho_p} \alpha_I \omega_I\!</math> to <math>v_J\!</math>. Hence the <math>\omega_I\!</math> form a basis.<math>\Box\!</math></p>

===Corollary===

<i>

<p> <math>\mathrm{dim}(A^p(V)) = \frac{n!}{p!(n-p)!}\!</math>, where <math>n=\mathrm{dim}(V)\!</math>. <math>\Box\!</math></p>

</i>
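
<p>For example, if <math>n = \mathrm{dim}(V) = 3\!</math> and <math>\{\omega_1,\omega_2,\omega_3\}\!</math> is a basis for <math>V^*\!</math>, then <math>A^2(V)\!</math> has basis <math>\{\omega_1\wedge\omega_2, \omega_1\wedge\omega_3, \omega_2\wedge\omega_3\}\!</math> and dimension <math>\frac{3!}{2!1!} = 3\!</math>, while <math>A^3(V)\!</math> is one-dimensional, spanned by <math>\omega_1\wedge\omega_2\wedge\omega_3\!</math>. For <math>p > n\!</math> there are no basis elements at all, reflecting the fact that <math>A^p(V) = 0\!</math> in that case.</p>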

<br>

<p> We may now define differential forms. The idea is to smoothly assign to each point <math>x\!</math> in a manifold <math>M\!</math> an element of <math>A^p(T_xM)\!</math>. </p>

===Definition===

<i>

<p>Let <math> M\!</math> be a smooth manifold of dimension <math>m\!</math>. For <math>0 \le p \le m\!</math>, a <b>differential <math>p\!</math>-form on <math>M\!</math></b> (or simply a <b>p-form</b>) is an assignment to each <math>x \in M\!</math> an element <math>\omega_x \in A^p(T_x M)\!</math> that is smooth in the sense that if <math>X_1,\ldots,X_p\!</math> are smooth vector fields on <math>M\!</math> then the map <math>M \ni x \mapsto \omega_x(X_1(x),\ldots,X_p(x)) \in \mathbb{R}\!</math> is <math>C^\infty\!</math>.</p>

<p>The collection of <math>p\!</math>-forms on <math>M\!</math> will be denoted by <math>\Omega^p(M)\!</math>.<math>\Box\!</math></p>

</i>

<br>

<p>If <math>\omega_1,\ldots,\omega_n \in \Omega^1(M)\!</math> are such that <math>(\omega_1)_x,\ldots,(\omega_n)_x\!</math> form a basis for <math>(T_xM)^*\!</math> for each <math>x\in U\!</math> with <math>U \subset M\!</math> open, then <math>\lambda \in \Omega^k(M)\!</math> can be written (for <math>x\in U\!</math>) as</p>


<div align="center"> <math> \lambda_x = \sum_{I \in \rho_k} a_I(x) (\omega_I)_x </math> </div>


<p>where the maps <math>a_I : U \to \mathbb{R}\!</math> are smooth. In fact, we could have taken this property as our definition of smoothness on <math>U\!</math>.</p>
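
<p>For example, on <math>M = \mathbb{R}^2\!</math> (with <math>U = \mathbb{R}^2\!</math>) the differentials <math>dx, dy \in \Omega^1(\mathbb{R}^2)\!</math> of the two coordinate functions, dual at each point to the coordinate vector fields <math>\partial_x, \partial_y\!</math>, form such a basis of <math>(T_x\mathbb{R}^2)^*\!</math>, and so every <math>1\!</math>-form on <math>\mathbb{R}^2\!</math> is <math>a\,dx + b\,dy\!</math> and every <math>2\!</math>-form is <math>c\,dx\wedge dy\!</math> for smooth functions <math>a, b, c : \mathbb{R}^2 \to \mathbb{R}\!</math>.</p>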
