0708-1300/Class notes for Tuesday, November 6
===Definition===

<i>
<p>For each <math>p,q \in \mathbb{N}\!</math> the <b>wedge product</b> is the map <math>\wedge : A^p(V) \times A^q(V) \to A^{p+q}(V), (\omega,\lambda) \mapsto \omega \wedge \lambda</math> defined by</p>

<div align="center"><math>(\omega \wedge \lambda) (v_1,\ldots,v_{p+q}) = \sum_{\sigma \in S_{p,q}} (-1)^\sigma\omega(v_{\sigma(1)},\ldots,v_{\sigma(p)})\lambda(v_{\sigma(p+1)},\ldots,v_{\sigma(p+q)})</math> </div>
<p>for every <math>v_1 ,\ldots,v_{p+q} \in V</math>, where <math>S_{p,q} = \{ \sigma \in S_{p+q} | \sigma(1) < \ldots < \sigma(p), \sigma(p+1) < \ldots < \sigma(p+q)\}</math>. <math>\Box</math></p>

</i>
<p>The idea behind this definition is to feed vectors to <math>\omega \wedge \lambda\!</math> in as many ways as possible. We could equally well have set</p>

<div align="center"> <math>(\omega \wedge \lambda) (v_1,\ldots,v_{p+q}) = \frac{1}{p!q!} \sum_{\sigma \in S_{p+q}} (-1)^\sigma\omega(v_{\sigma(1)},\ldots,v_{\sigma(p)})\lambda(v_{\sigma(p+1)},\ldots,v_{\sigma(p+q)})</math>. </div>
<p>The factor of <math>\frac{1}{p!q!}</math> compensates for the overcounting that we do by summing over all permutations, since there are <math>p!\!</math> ways of rearranging the <math>p\!</math> vectors fed to <math>\omega\!</math> if we don't care about order, but only one way if we do care. The same argument accounts for the <math>q!\!</math>.</p>
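<p>The agreement of the two formulas can be checked numerically. The sketch below is our own illustration, not part of the original notes; the helper names (<code>wedge_shuffles</code>, <code>wedge_fullsum</code>) are made up. A <math>p\!</math>-form on <math>\mathbb{R}^3\!</math> is represented as a Python function of <math>p\!</math> vectors, and both definitions are compared on the wedge of a 1-form with a 2-form.</p>

```python
from itertools import permutations
from math import comb, factorial

def sign(p):
    # Sign of a permutation given as a tuple of 0-based values.
    s, seen = 1, set()
    for i in range(len(p)):
        j = i
        while j not in seen:
            seen.add(j)
            j = p[j]
            if j != i:
                s = -s
    return s

def wedge_shuffles(om, p, lam, q):
    # Shuffle-sum definition: sum over sigma in S_{p,q}.
    def res(*vs):
        tot = 0.0
        for s in permutations(range(p + q)):
            if all(s[i] < s[i + 1] for i in range(p - 1)) and \
               all(s[i] < s[i + 1] for i in range(p, p + q - 1)):
                tot += sign(s) * om(*(vs[s[i]] for i in range(p))) \
                               * lam(*(vs[s[i]] for i in range(p, p + q)))
        return tot
    return res

def wedge_fullsum(om, p, lam, q):
    # Alternative definition: sum over all of S_{p+q}, divided by p! q!.
    def res(*vs):
        tot = 0.0
        for s in permutations(range(p + q)):
            tot += sign(s) * om(*(vs[s[i]] for i in range(p))) \
                           * lam(*(vs[s[i]] for i in range(p, p + q)))
        return tot / (factorial(p) * factorial(q))
    return res

dz = lambda v: v[2]                              # a 1-form on R^3
dxdy = lambda u, v: u[0] * v[1] - u[1] * v[0]    # the 2-form dx ^ dy

a = wedge_shuffles(dz, 1, dxdy, 2)
b = wedge_fullsum(dz, 1, dxdy, 2)
vecs = [(1.0, 2.0, 0.5), (0.0, 1.0, 3.0), (2.0, 0.0, 1.0)]
print(a(*vecs), b(*vecs))  # 12.0 12.0

# |S_{1,2}| = C(3,1): a (p,q)-shuffle is determined by which p slots
# go to omega, so there are (p+q choose p) of them.
shuffles_12 = [s for s in permutations(range(3)) if s[1] < s[2]]
assert len(shuffles_12) == comb(3, 1)
assert abs(a(*vecs) - b(*vecs)) < 1e-12
```

<p>Since <math>\lambda\!</math> here is already skew-symmetric, each shuffle term appears <math>p!q!\!</math> times in the full sum, which is exactly what the division undoes.</p>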
<p>Of course, as we have defined it, it is not immediately clear that <math>\omega \wedge \lambda\in A^{p+q}(V)\!</math>. However, multilinearity is obvious and it is fairly clear that the <math>(-1)^\sigma\!</math> takes care of the skew-symmetry.</p>
<p>In fact, <math>\wedge\!</math> has a number of nice properties:</p>

===Proposition===

<i>

<p>The following statements hold:</p>

<ol>
<li> <math>\wedge\!</math> is a bilinear map.
<li> <math>\wedge\!</math> is associative.

<li> <math>\wedge\!</math> is <b>supercommutative</b>: <math>\omega \wedge \lambda = (-1)^{\mathrm{deg}(\omega)\mathrm{deg}(\lambda)} \lambda \wedge \omega\!</math>.

</ol>

</i>
====Proof====

<p>Bilinearity is clear. For associativity, one checks that both <math>(\omega \wedge \lambda)\wedge \mu\!</math> and <math>\omega \wedge (\lambda \wedge \mu)\!</math> are given by a single sum over <math>(p,q,r)\!</math>-shuffles. For supercommutativity, the permutation that moves the <math>q\!</math> arguments of <math>\lambda\!</math> past the <math>p\!</math> arguments of <math>\omega\!</math> is a composition of <math>pq\!</math> transpositions, which accounts for the sign <math>(-1)^{pq}\!</math>. <math>\Box\!</math></p>
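<p>Both properties can be spot-checked numerically. The sketch below (ours, under the same shuffle-sum convention as the definition above; the helper names are made up) checks associativity and supercommutativity for coordinate 1-forms on <math>\mathbb{R}^3\!</math>.</p>

```python
from itertools import permutations

def sign(p):
    # Sign of a permutation given as a tuple of 0-based values.
    s, seen = 1, set()
    for i in range(len(p)):
        j = i
        while j not in seen:
            seen.add(j)
            j = p[j]
            if j != i:
                s = -s
    return s

def wedge(om, p, lam, q):
    # Shuffle-sum wedge product of a p-form and a q-form.
    def res(*vs):
        tot = 0.0
        for s in permutations(range(p + q)):
            if all(s[i] < s[i + 1] for i in range(p - 1)) and \
               all(s[i] < s[i + 1] for i in range(p, p + q - 1)):
                tot += sign(s) * om(*(vs[s[i]] for i in range(p))) \
                               * lam(*(vs[s[i]] for i in range(p, p + q)))
        return tot
    return res

dx = lambda v: v[0]
dy = lambda v: v[1]
dz = lambda v: v[2]
vecs = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (1.0, 0.0, 2.0)]

# Associativity: (dx ^ dy) ^ dz == dx ^ (dy ^ dz) as 3-forms.
left = wedge(wedge(dx, 1, dy, 1), 2, dz, 1)
right = wedge(dx, 1, wedge(dy, 1, dz, 1), 2)
assert abs(left(*vecs) - right(*vecs)) < 1e-12

# Supercommutativity with deg(omega)·deg(lambda) = 1: sign is -1.
w1, w2 = wedge(dx, 1, dy, 1), wedge(dy, 1, dx, 1)
u, v = vecs[0], vecs[1]
assert w1(u, v) == -w2(u, v)

# With deg(omega)·deg(lambda) = 2 the sign is +1.
w3 = wedge(wedge(dx, 1, dy, 1), 2, dz, 1)
w4 = wedge(dz, 1, wedge(dx, 1, dy, 1), 2)
assert abs(w3(*vecs) - w4(*vecs)) < 1e-12
```

<p>Note that the triple wedge of the coordinate 1-forms evaluates to the <math>3 \times 3\!</math> determinant of the argument vectors, as one would expect.</p>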
<br>

<p>It turns out that we can use the wedge product to find bases for <math>A^p(V)\!</math>:</p>
===Proposition===

<i>

<p>If <math>\{\omega_1,\ldots,\omega_n \}\subset V^* \!</math> is a basis for <math>V^*\!</math>, then <math>\{\omega_{i_1}\wedge\cdots\wedge\omega_{i_p} \in A^p(V) | i_1 < \ldots < i_p \}\!</math> is a basis for <math>A^p(V)\!</math>.</p>

</i>
====Proof====

<p>Let <math>\{v_1,\ldots,v_n \}\subset V \!</math> be the dual basis to <math>\{\omega_1,\ldots,\omega_n \}</math>, so that <math>\omega_i(v_j) = \delta_{ij}\!</math>. Let <math>\rho_p = \{(i_1,\ldots,i_p) \in \mathbb{N}^p | i_1 < \ldots < i_p\}\!</math>. For <math>I,J \in \rho_p\!</math> with <math>I = (i_1,\ldots,i_p)\!</math> and <math>J= (j_1,\ldots,j_p)\!</math>, let <math>\omega_I = \omega_{i_1} \wedge \cdots \wedge \omega_{i_p}</math>, and let <math>v_J = (v_{j_1},\ldots,v_{j_p})</math>. Then <math>\omega_I(v_J) = 1\!</math> if <math>I=J\!</math> and <math>\omega_I(v_J) = 0\!</math> otherwise.</p>
<p>We claim that if <math>\omega \in A^p(V)\!</math>, then <math>\omega = \sum_{I\in\rho_p} \omega(v_I) \omega_I\!</math>. But <math>\sum_{I\in\rho_p} \omega(v_I) \omega_I(v_J) = \sum_{I\in\rho_p} \omega(v_I) \delta_{IJ} = \omega(v_J)\!</math>, and an element of <math>A^p(V)\!</math> is determined by its values on the tuples <math>v_J\!</math> with <math>J \in \rho_p\!</math>, so this is clear. We claim further that the <math>\omega_I\!</math> are linearly independent. But if <math>0 = \sum_{I \in \rho_p} \alpha_I \omega_I\!</math>, then applying <math>\sum_{I \in \rho_p} \alpha_I \omega_I\!</math> to <math>v_J\!</math> gives <math>\alpha_J = 0 \!</math>. Hence the <math>\omega_I\!</math> form a basis. <math>\Box\!</math></p>
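<p>The key identity <math>\omega_I(v_J) = \delta_{IJ}\!</math> is easy to verify in a small case. Below is our own sketch (not from the notes) for <math>n=3\!</math>, <math>p=2\!</math>, using the coordinate 1-forms on <math>\mathbb{R}^3\!</math> and the two-term formula for the wedge of two 1-forms.</p>

```python
from itertools import combinations

# Coordinate 1-forms on R^3: w(i)(v) = v[i], dual to the standard basis.
def w(i):
    return lambda v: v[i]

def wedge2(a, b):
    # Wedge of two 1-forms: (a ^ b)(u, v) = a(u) b(v) - a(v) b(u).
    return lambda u, v: a(u) * b(v) - a(v) * b(u)

e = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
rho2 = list(combinations(range(3), 2))   # increasing index pairs rho_2

for I in rho2:
    wI = wedge2(w(I[0]), w(I[1]))
    for J in rho2:
        vJ = (e[J[0]], e[J[1]])
        expected = 1.0 if I == J else 0.0
        assert wI(*vJ) == expected
print("omega_I(v_J) = delta_IJ verified for n=3, p=2")
```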
===Corollary===

<i>

<p> <math>\mathrm{dim}(A^p(V)) = \frac{n!}{p!(n-p)!}\!</math>, where <math>n=\mathrm{dim}(V)\!</math>. <math>\Box\!</math></p>

</i>
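<p>As a quick numeric aside (ours, not part of the notes): the dimensions are binomial coefficients, so for <math>n = 4\!</math> they read off a row of Pascal's triangle and sum to <math>2^n\!</math>, the dimension of the full exterior algebra.</p>

```python
from math import comb, factorial

n = 4                                   # dim V
dims = [comb(n, p) for p in range(n + 1)]
print(dims)                             # [1, 4, 6, 4, 1]

# n!/(p!(n-p)!) agrees with the binomial coefficient.
assert all(comb(n, p) == factorial(n) // (factorial(p) * factorial(n - p))
           for p in range(n + 1))

# Summing over all p gives 2^n.
assert sum(dims) == 2 ** n
```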
<p>We may now define differential forms. The idea is to smoothly assign to each point <math>x\!</math> in a manifold <math>M\!</math> an element of <math>A^p(T_xM)\!</math>.</p>

===Definition===

<i>
<p>Let <math> M\!</math> be a smooth manifold of dimension <math>m\!</math>. For <math>0 \le p \le m\!</math>, a <b>differential <math>p\!</math>-form on <math>M\!</math></b> (or simply a <b>p-form</b>) is an assignment to each <math>x \in M\!</math> of an element <math>\omega_x \in A^p(T_x M)\!</math> that is smooth in the sense that if <math>X_1,\ldots,X_p\!</math> are smooth vector fields on <math>M\!</math> then the map <math>M \ni x \mapsto \omega_x(X_1(x),\ldots,X_p(x)) \in \mathbb{R}\!</math> is <math>C^\infty\!</math>.</p>

<p>The collection of <math>p\!</math>-forms on <math>M\!</math> will be denoted by <math>\Omega^p(M)\!</math>. <math>\Box\!</math></p>

</i>
<p>If <math>\omega_1,\ldots,\omega_n \in \Omega^1(M)\!</math> are such that <math>(\omega_1)_x,\ldots,(\omega_n)_x\!</math> form a basis for <math>(T_xM)^*\!</math> for each <math>x\in U\!</math> with <math>U \subset M\!</math> open, then <math>\lambda \in \Omega^k(M)\!</math> can be written (for <math>x\in U\!</math>) as</p>

<div align="center"> <math> \lambda_x = \sum_{I \in \rho_k} a_I(x) (\omega_I)_x </math> </div>

<p>where the maps <math>a_I : U \to \mathbb{R}\!</math> are smooth. In fact, we could have taken this property as our definition of smoothness on <math>U\!</math>.</p>
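<p>A concrete instance of these coefficient functions (our own example, not from the notes): on <math>\mathbb{R}^2\!</math> with the coordinate coframe <math>dx, dy\!</math>, the 1-form <math>\lambda = -y\, dx + x\, dy\!</math> has smooth coefficients <math>a_{dx}(x,y) = -y\!</math> and <math>a_{dy}(x,y) = x\!</math>, and can be evaluated pointwise on tangent vectors.</p>

```python
import math

# Coefficient functions of lambda = -y dx + x dy relative to dx, dy.
def a_dx(x):                 # coefficient of dx at the point x = (x1, x2)
    return -x[1]

def a_dy(x):                 # coefficient of dy
    return x[0]

def lam(x, v):
    # Evaluate lambda at the point x on the tangent vector v.
    return a_dx(x) * v[0] + a_dy(x) * v[1]

pt = (math.cos(0.3), math.sin(0.3))     # a point on the unit circle
tangent = (-pt[1], pt[0])               # velocity of the circle at pt

# Along the unit circle this form evaluates to 1 on the velocity field.
assert abs(lam(pt, tangent) - 1.0) < 1e-12
```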
Revision as of 14:59, 17 November 2007

==Class Notes==

<p>The notes below are by the students and for the students. Hopefully they are useful, but they come with no guarantee of any kind.</p>
<p>We will now shift our attention to the theory of integration on smooth manifolds. The first thing that we need to construct is a means of measuring volumes on manifolds. To accomplish this goal, we begin by imagining that we want to measure the volume of the "infinitesimal" parallelepiped defined by a set of vectors <math>v_1,\ldots,v_p \in V\!</math> by feeding these vectors into some function <math>\omega\!</math>. We would like <math>\omega\!</math> to satisfy a few properties:</p>

<ol>

<li> <math>\omega\!</math> should be linear in each argument: for example, if we double the length of one of the sides, the volume should double.

<li> If two of the vectors fed to <math>\omega\!</math> are parallel, the volume assigned by <math>\omega\!</math> should be zero, because the parallelepiped collapses to something of lower dimension in this case.

</ol>
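<p>As a quick numeric illustration of these two requirements (our own sketch, not part of the original notes): the <math>2 \times 2\!</math> determinant measures the signed area of the parallelogram spanned by two vectors in <math>\mathbb{R}^2\!</math> and behaves exactly this way.</p>

```python
def det2(u, v):
    # Signed area of the parallelogram spanned by u, v in R^2.
    return u[0] * v[1] - u[1] * v[0]

u, v = (2.0, 1.0), (1.0, 3.0)

# Doubling one side doubles the volume (linearity in each argument).
assert det2((2 * u[0], 2 * u[1]), v) == 2 * det2(u, v)

# Parallel sides give zero volume: the parallelepiped is degenerate.
assert det2(u, (3 * u[0], 3 * u[1])) == 0.0

print(det2(u, v))  # 5.0
```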
<p>Inspired by these requirements, we make the following definition:</p>
===Definition===

<i>

<p>Let <math>V\!</math> be a real vector space, let <math>p \in \mathbb{N}\!</math> and let <math>L^p(V)\!</math> denote the collection of maps from <math>V^p\!</math> to <math>\mathbb{R}\!</math> that are linear in each argument separately. We set</p>

<div align="center"><math>A^p(V) = \{ \omega \in L^p(V) | \omega(v_1,\ldots,v_p) = 0 \mbox{ whenever } v_i = v_j \mbox{ for some } i \neq j \}</math>. </div>

</i>
===Proposition===

<i>

<p>Suppose that <math>p \in \mathbb{N}\!</math> and <math>\omega \in A^p(V)\!</math>. The following statements hold:</p>

<ol>

<li> <math>A^p(V)\!</math> has a natural vector space structure.

<li> <math>A^0(V)\!</math> is <math>\mathbb{R}\!</math>.

<li> <math>A^1(V)\!</math> is the dual space <math>V^*\!</math> of <math>V\!</math>.

<li> <math>\omega(v_1,\ldots,v,\ldots,w,\ldots,v_p) = -\omega(v_1,\ldots,w,\ldots,v,\ldots,v_p)\!</math> for every <math>v,w \in V\!</math>.

<li> If <math>\sigma \in S_p\!</math> is a permutation, then <math>\omega(v_{\sigma(1)},\ldots,v_{\sigma(p)}) = (-1)^\sigma \omega(v_1,\ldots,v_p)\!</math>.

</ol>

</i>
====Proof====

<p>The first statement is easy to show and is left as an exercise. The second statement is more of a convenient definition. Note that <math>A^0(V)\!</math> consists of all maps that take no vectors and return a real number, since the other properties are vacuous when the domain is empty. We can thus interpret an element of this space simply as a real number. The third statement is clear, as the definitions of <math>A^1(V)\!</math> and <math>V^*\!</math> coincide.</p>

<p>As for the fourth, note that <math>0 = \omega(\ldots,v+w,\ldots,v+w,\ldots)\!</math>, so that using linearity we obtain</p>

<div align="center"><math>0 = \omega(\ldots,v,\ldots,v,\ldots) + \omega(\ldots,v,\ldots,w,\ldots) + \omega(\ldots,w,\ldots,v,\ldots) + \omega(\ldots,w,\ldots,w,\ldots) = \omega(\ldots,v,\ldots,w,\ldots) + \omega(\ldots,w,\ldots,v,\ldots)</math></div>

<p>and hence <math>\omega(\ldots,v,\ldots,w,\ldots) = -\omega(\ldots,w,\ldots,v,\ldots)\!</math>.</p>

<p>The fifth statement then follows from repeated application of the fourth, since every permutation is a composition of transpositions. <math>\Box\!</math></p>
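<p>The fifth statement can be spot-checked numerically with the determinant on <math>\mathbb{R}^3\!</math>, a standard example of an element of <math>A^3(\mathbb{R}^3)\!</math>. The sketch below is our own illustration, not part of the notes.</p>

```python
from itertools import permutations

def sign(p):
    # Sign of a permutation given as a tuple of 0-based values.
    s, seen = 1, set()
    for i in range(len(p)):
        j = i
        while j not in seen:
            seen.add(j)
            j = p[j]
            if j != i:
                s = -s
    return s

def det3(v1, v2, v3):
    # det lies in A^3(R^3): multilinear and zero on repeated arguments.
    return (v1[0] * (v2[1] * v3[2] - v2[2] * v3[1])
          - v1[1] * (v2[0] * v3[2] - v2[2] * v3[0])
          + v1[2] * (v2[0] * v3[1] - v2[1] * v3[0]))

vs = [(1.0, 2.0, 3.0), (0.0, 1.0, 4.0), (2.0, 1.0, 0.0)]
base = det3(*vs)

# Permuting the arguments multiplies the value by the sign of sigma.
for s in permutations(range(3)):
    permuted = det3(*(vs[s[i]] for i in range(3)))
    assert permuted == sign(s) * base
```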
===Remarks===

<p>Our computation in the previous proof shows that we could equally well have defined <math>A^p(V)\!</math> to consist of all those multilinear maps from <math>V^p\!</math> to <math>\mathbb{R}\!</math> that change sign when two arguments are interchanged.</p>

<p>One of the nicest things about these spaces is that we can define a sort of multiplication of elements of <math>A^p(V)\!</math> with elements of <math>A^q(V)\!</math>. This multiplication is called the <b>wedge product</b>, and it was defined in the section above.</p>