Drorbn - User contributions [en] - MediaWiki 1.21.1

Talk:06-240 - 2006-12-13, Wongpak: /* Exam Forum */
<hr />
<div>If anyone is interested in typesetting the lectures, I think they should follow these Wikipedia guidelines: http://en.wikipedia.org/wiki/WP:MSM

==Exam Forum==
I hope you will excuse my intrusion onto the front page, but I thought the link might help increase participation. It will be removed right after the final exam.

Also, my use of "help", leaving it ambiguous as to whether the person clicking on the link would be receiving or giving help, is intentional.
<br />
== Modular Arithmetic ==

This was particularly interesting after being introduced to modular multiplication tables and seeing some visual patterns in the numbers, such as the pattern in the '1' column, where the entries run from 1 to n-1 in Zn, and in reverse in the 'n-1' column.

After searching around, it seems that people have been able to discover other, more interesting patterns!

Note when reading the tables that they begin from the bottom-left corner, instead of the top-left corner that we saw in class.

http://whistleralley.com/mod/mod25.htm

The following site allows you to see tables up to mod 30.

http://www.cut-the-knot.org/blue/Modulo.shtml

-Richard

Also, notice how in modular multiplication tables for prime numbers, specifically for modulo 5, only 0s appear in the rows and columns for 0 and 5. The 0s create a sort of frame around a 4x4 square of entries; all entries within the frame of 0s are between 1 and n-1, and all are non-zero. In the case of the mod 4 table, by contrast, a 0 appears as the product of two non-zero elements, which, as proved in class, causes Z4 to fail to be a field. There must be something deeper about all those 0s.
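For anyone who wants to experiment with these patterns, here is a small Python sketch (my own, not from the course) that builds the multiplication table mod n and checks the zero-divisor observation above:

```python
def mult_table(n):
    """Return the n x n multiplication table mod n as a list of rows."""
    return [[(i * j) % n for j in range(n)] for i in range(n)]

def has_zero_divisors(n):
    """True if some product of two non-zero elements is 0 mod n."""
    t = mult_table(n)
    return any(t[i][j] == 0 for i in range(1, n) for j in range(1, n))

for row in mult_table(5):
    print(row)

print(has_zero_divisors(5))   # False: no zero divisors, consistent with Z_5 being a field
print(has_zero_divisors(4))   # True: 2 * 2 = 0 mod 4
```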
<br />
== Mistake in the timetable on the main page of 06-240? ==

In the timetable on the main page of 06-240, i.e. http://katlas.math.toronto.edu/drorbn/index.php?title=06-240 , I think there's a mistake in the date of the first test. There, it's written Oct 23rd, but we have no class on Oct 23, which is a Monday. I guess the correct date should be Oct 24th, as written in the course outline.

-Yanshuai

The dates on the timetable are the dates of the Mondays in each week; the header says "week of...". --[[User:Drorbn|Drorbn]] 17:46, 9 October 2006 (EDT)
<br />
== TA Office Hours? ==

Do they have any?</div>

06-240/Final Exam Preparation Forum - 2006-12-13, Wongpak: /* Exam April 2004 #6(a) */
<hr />
<div>{{06-240/Navigation}}

If you have questions, ask them here, and hopefully someone else will know the answer. (Answering questions will probably help you understand the material more.)

Since many of us (including me) don't really know how to use wikis, I suggest that we keep the formatting simple: I will post a template at the top of this page, and if you want to add something just click "edit", copy the template, and insert your question. Order the questions by section (i.e. solved/unsolved; '''whoever created the question must decide if it is solved, and sort it accordingly'''), with the newest at the top, except for the template question. In general, I wouldn't retype a question that comes from the book, because that's tedious and we all have the book.

(By the way, I think you leave a blank line in the code to start a new line; simply pressing enter once will not make a new line. Also, you can press a button at the top of the editing textbox that lets you put in simple equations.)
<br />
==Unsolved Questions==

===Question Template===
Q: Can someone help me prove: "If an [[integer]] <math>n</math> is greater than 2, then <math>a^n + b^n = c^n</math> has no solutions in non-zero integers <math>a</math>, <math>b</math>, and <math>c</math>"? I had the answer in my head at one point, but the margins of the piece of paper I was working with were too small to fit it.
<br />
===Rank of Matrices===
Q: Prove that if rank(A) = 0 (for A of dimension m x n), then A is the zero matrix. (The question is found on p. 166, #3.) Did anyone use transformations in this? My proof relies on rank(L_A) = 0 (for the left-multiplication transformation L_A) implying that L_A is the zero transformation. Is there an easier way?

A: It depends on what you're allowed to assume. If you can use the fact that the rank is equal to the number of linearly independent columns, then clearly all of the columns must be zero (otherwise you would have at least one non-zero, hence linearly independent, column).
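As a quick numerical illustration of the statement (a sketch assuming NumPy is available; `matrix_rank` is not part of the course toolkit): a single non-zero entry already forces the rank above 0.

```python
import numpy as np

A = np.zeros((3, 4))
print(np.linalg.matrix_rank(A))   # 0: the zero matrix

B = np.zeros((3, 4))
B[1, 2] = 5.0                     # one non-zero entry...
print(np.linalg.matrix_rank(B))   # 1: ...already gives a linearly independent column
```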
<br />
===Determinants===
Q: If we can get back the same matrix after 2n-1 row swaps, what does it mean? Does it mean that the determinant is 0?

R: Is this related to a question from somewhere?
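For what it's worth, the sign-flip fact that seems to be behind this question can be checked numerically (a sketch assuming NumPy): a single row swap multiplies the determinant by -1, so a matrix equal to itself after an odd number (2n-1) of swaps would satisfy det A = -det A.

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
swapped = A[[1, 0], :]          # swap rows 0 and 1

print(np.linalg.det(A))         # the two determinants differ only in sign
print(np.linalg.det(swapped))

# A matrix unchanged by an odd number of swaps would satisfy
# det(A) = -det(A), which forces det(A) = 0.
```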
<br />
===Sec 3.2 Ex. 19===
Q: "Let A be an m x n matrix with rank m and B be an n x p matrix with rank n. Determine the rank of AB. Justify your answer." I know how to show that the rank can't be more than m (not much of an accomplishment), but I can't finish it.

A: According to Theorem 3.7 (a), (c) & (d) (p. 159), I would say rank(AB) <math>\le</math> min(m, n).

R: Can we not get any more specific than that?

A: Let <math>L_A, L_B, L_{AB}</math> have their usual meanings. Then <math>L_B : F^p \to F^n</math> is onto, so we get <math>R(L_{AB}) = R(L_A L_B) = L_A L_B (F^p) = L_A (F^n) = R(L_A)</math>, i.e. <math>\operatorname{rank}(L_{AB}) = \operatorname{rank}(L_A) = m</math>.
<br />
===Sec. 3.2 Ex. 21===
Q: "Let A be an m x n matrix with rank m. Prove that there exists an n x m matrix B such that AB = <math>I_m</math>."

A: Since rank A = m, the map <math>L_A : F^n \to F^m</math> is onto. So for each standard basis vector <math>e_i \in F^m</math> we can choose some <math>b_i \in F^n</math> with <math>A b_i = e_i</math>. Let B be the n x m matrix whose columns are <math>b_1, \dots, b_m</math>; then <math>AB = I_m</math>, as desired.
<br />
===Exam April/May 2006 #4===
Q: Suppose that A, B Є M<sub>mxn</sub>(F), and rank(A) = rank(B). Prove that there exist invertible matrices P Є M<sub>mxm</sub>(F) and Q Є M<sub>nxn</sub>(F) such that B = PAQ.

A (partial): Here is a sketch. If you row-reduce A and B to reduced row echelon form by applying a series of elementary row operation matrices, they will both look alike. That is, each will have a block of 1's and 0's (each 1 being the only non-zero entry in its column) and then a block of "remaining stuff", and these blocks will be the same size because the ranks are equal. Then, using elementary column operations, you can modify the "remaining stuff" as much as you like, by adding multiples of the "nice" columns (the ones with a single 1). These row and column operations can then be grouped and set equal to P and Q, which are invertible because products of elementary matrices are invertible.

I know this is very rough, but even if I did have a full answer I wouldn't know how to typeset it.
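This sketch can be carried out concretely (assuming SymPy; the helper `canonical` and the test matrices are mine): row-reducing [A | I] yields an invertible P with PA = rref(A), doing the same to the transpose yields Q, and two matrices of equal rank r then share the block form [[I_r, 0], [0, 0]], from which B = (P2<sup>-1</sup>P1) A (Q1 Q2<sup>-1</sup>).

```python
import sympy as sp

def canonical(A):
    """Return invertible P, Q with P*A*Q = [[I_r, 0], [0, 0]]."""
    m, n = A.shape
    # The right block of rref([A | I]) is an invertible P with P*A = rref(A).
    P = A.row_join(sp.eye(m)).rref()[0][:, n:]
    R = P * A
    # Same trick on the transpose handles the column operations.
    Qt = R.T.row_join(sp.eye(n)).rref()[0][:, m:]
    return P, Qt.T

A = sp.Matrix([[1, 2, 3], [2, 4, 6]])   # rank 1
B = sp.Matrix([[0, 1, 0], [0, 2, 0]])   # rank 1
P1, Q1 = canonical(A)
P2, Q2 = canonical(B)
P, Q = P2.inv() * P1, Q1 * Q2.inv()

print(P * A * Q == B)   # True
```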
<br />
===Complex Numbers===
Q: If 'C' is used in the context of a vector space (as in "define T : C -> C"), then should we consider C to be the vector space C over the field C, or instead C over the field R?
<br />
===Readings?===
Q: Are we expected to know section 5.2 of the textbook? Although the assignments tell us to read it, we didn't do any questions on it or cover it in class.

A: I highly doubt it; hopefully someone will ask Prof. Bar-Natan tomorrow and post the answer here. There were a few other chapters that had sections we never really talked about either (some applications). Addendum: I second this request for a slight narrowing of what the relevant readings are; for instance, can we be more efficient in our reading of chapter 4 somehow?

R: I think that if you want to cut down on Chapter 4, then skipping the applications to area (discussed very briefly in class) and determinants of order 2 is the most you can do.

R: What about 4.5, the axiomatic treatment? It discusses how the determinant is uniquely defined by the three axiomatic properties, but I don't think we did that in class.
<br />
==Solved Questions==

===Question Template===
Q: How many ways are there to get to the nth stair, if at each step you can move up either one or two stairs?

A: This question can be modeled by the Fibonacci numbers, with the nth number giving the number of ways to get to the nth stair. This is because you can get to the nth stair only from the (n-1)th or the (n-2)th, so the counts satisfy exactly the recurrence that defines the Fibonacci numbers; the proof is a simple induction.
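A minimal Python sketch of this recurrence (the base cases ways(0) = ways(1) = 1 are a modeling choice):

```python
def ways(n):
    """Number of ways to climb to stair n taking steps of 1 or 2."""
    a, b = 1, 1          # ways(0) = 1 (do nothing), ways(1) = 1
    for _ in range(n - 1):
        a, b = b, a + b  # ways(k) = ways(k-1) + ways(k-2)
    return b if n >= 1 else 1

print([ways(n) for n in range(1, 8)])  # [1, 2, 3, 5, 8, 13, 21]
```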
<br />
===Sec. 1.3 Thm 1.3 Proof===
Q: In the first paragraph of the proof, it says "But also x + 0 = x, and thus 0' = 0." How do we know 0 (that is, the 0 of V) even exists in W? I understand that we know ''some'' zero exists (0'), but not why ''the'' zero (0) exists.

A: x is in W as well as in V. Thus x + 0 = x (VS 3), as an equation in V.

Reply: Oh I see... now it looks so obvious =/. Thanks.
<br />
===Exam April/May 2006 #3(b)===
Q: Let T : M<sub>3x2</sub>(C) -> M<sub>2x3</sub>(C) be defined as follows. Given A Є M<sub>3x2</sub>(C), let B be the matrix obtained from A by adding i times the second row of A to the third row of A. Let T(A) = B<sup>t</sup>, where B<sup>t</sup> is the transpose of B. (Note: Here, i is a complex number such that i<sup>2</sup> = -1.) Determine whether the linear transformation T is invertible.

I'm totally lost on this question :/ Please show an example matrix and how it is transformed as the question asks, if possible. I want to see what actually happens to the entries of the matrix rather than just the answer (I think that would be more important).

A (Matrix Elements): This is my interpretation:

A = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math> where <math>a_{ij} \in C</math>, then B = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\ia_{21}+a_{31}&ia_{22}+a_{32}\end{pmatrix}</math>.

Therefore, T(A) = B<sup>t</sup> means T<math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math>=<math>\begin{pmatrix}a_{11}&a_{21}&ia_{21}+a_{31}\\a_{12}&a_{22}&ia_{22}+a_{32}\end{pmatrix}</math>

R: Thanks a lot, the matrices are really helpful :)
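If it helps, invertibility can also be checked numerically (a sketch assuming NumPy; the function and variable names are mine): build the 6 x 6 matrix of T with respect to the standard bases of M<sub>3x2</sub>(C) and M<sub>2x3</sub>(C) and verify that it has full rank.

```python
import numpy as np

def T(A):
    """Add i times row 2 to row 3, then transpose, as in the question."""
    B = A.astype(complex).copy()
    B[2, :] += 1j * B[1, :]
    return B.T

# Columns of M are the images of the standard basis matrices E_jk,
# flattened; T is invertible iff M is.
basis_images = []
for j in range(3):
    for k in range(2):
        E = np.zeros((3, 2), dtype=complex)
        E[j, k] = 1
        basis_images.append(T(E).flatten())
M = np.column_stack(basis_images)

print(np.linalg.matrix_rank(M))   # 6, so T is invertible
```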
<br />
===Sec. 2.4 Lemma p. 101===
Q: In the proof of the lemma, on the second line, we have "T(beta) spans R(T) = W". How do we know that R(T) = W? This would be true if dim V = dim W, because then T would be onto, but we can't assume what we're trying to prove.

A: The first line of the lemma states, "Let T be an '''invertible''' linear trans..." So T is onto (and 1-1), thus "T(beta) spans R(T) = W".

R: Yes. My trouble was with the fact that invertibility implies onto-ness. I thought that if we had <math> T:P_2 (R)\to P_6(R) </math> and T(f) = xf, then T would still be invertible, since you can 'recover' f if you were given xf. I guess it makes more sense not to call T invertible in this case, because <math>T^{-1}</math> is technically only defined on the range of T.

R: T has to be both onto and 1-1 for it to be invertible. In your example, some of the <math> f \in P_6(R)</math> will not be 'recovered', because they weren't mapped to from <math> P_2(R)</math>. Furthermore, T<sup>-1</sup> has to map the whole vector space W back to V (as defined on p. 99), not just the range. In other words, if T is 1-1 only, then <math>T^{-1} \circ T(v) = v</math> for all <math>v\in V</math>, but <math>T \circ T^{-1}(w) \neq w</math> for some <math>w\in W</math>, because T<sup>-1</sup>(w) is not defined for some w.

R: That nicely rigorizes what I was thinking, and I'm convinced. Thanks.
<br />
===Exam April/May 2006 #7===
Q: Let T : V -> V and U : V -> V be linear operators on a finite-dimensional vector space V. Assume that U is invertible and T is diagonalizable. Prove that the linear operator UTU<sup>-1</sup> = U o T o U<sup>-1</sup>.

I don't know where or how to start this question ><.

A: I think we need to prove that UTU<sup>-1</sup> is diagonalizable, rather than proving UTU<sup>-1</sup> = U o T o U<sup>-1</sup>.

I started by letting A = UTU<sup>-1</sup>; multiplying both sides by U<sup>-1</sup> on the left and U on the right, we get U<sup>-1</sup>AU = U<sup>-1</sup>UTU<sup>-1</sup>U, i.e. U<sup>-1</sup>AU = T. Since T is diagonalizable, there exists an invertible matrix Q such that Q<sup>-1</sup>TQ = D, where D is a diagonal matrix. Therefore Q<sup>-1</sup>(U<sup>-1</sup>AU)Q = D, i.e. (UQ)<sup>-1</sup>A(UQ) = D (because U and Q are invertible, Q<sup>-1</sup>U<sup>-1</sup> = (UQ)<sup>-1</sup>), and it follows that A is diagonalizable.
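A numerical check of this argument (a sketch assuming NumPy; the random matrices are my own test data): if Q<sup>-1</sup>TQ = D, then UQ diagonalizes A = UTU<sup>-1</sup>.

```python
import numpy as np

rng = np.random.default_rng(2)
D = np.diag([1.0, 2.0, 3.0])
Q = rng.standard_normal((3, 3))          # almost surely invertible
T = Q @ D @ np.linalg.inv(Q)             # a diagonalizable operator
U = rng.standard_normal((3, 3))          # almost surely invertible

A = U @ T @ np.linalg.inv(U)
UQ = U @ Q
print(np.round(np.linalg.inv(UQ) @ A @ UQ, 8))  # recovers the diagonal matrix D
```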
<br />
===Exam April 2004 #6(a)===
Q: Suppose A is an invertible matrix for which the sum of the entries of each row is a scalar <math>\lambda</math>. Show that the sum of the entries of each row of A<sup>-1</sup> is <math>1/\lambda</math>. (Hint: find an eigenvector for A with eigenvalue <math>\lambda</math>.) If A is a diagonal matrix, then it's obvious that the sum of the entries of each row is <math>\lambda</math> and the sum of the entries of each row of A<sup>-1</sup> is <math>1/\lambda</math>. I was stuck on the case of a general invertible matrix.

A: Following the hint, you can see that an eigenvector corresponding to <math>\lambda</math> is (1, 1, 1, ...)*. Therefore <math>Av=\lambda v</math>, and rearranging you get <math>A^{-1}v=\frac{1}{\lambda} v</math>. Reading this equation row by row, with v = (1, 1, 1, ...), it says exactly that the sum of the entries in each row of A<sup>-1</sup> is <math>1/\lambda</math>.
<br />
*Just to elaborate on the first part, you are looking for a vector <math> v = (x_1, x_2, x_3, \dots) </math> such that <math> (A-\lambda I)v = 0</math>. This corresponds to the system:
<math>\begin{cases}(a_{11}-\lambda)x_1+a_{12}x_2+a_{13}x_3+\cdots=0\\a_{21}x_1+(a_{22}-\lambda)x_2+a_{23}x_3+\cdots=0\\a_{31}x_1+a_{32}x_2+(a_{33}-\lambda)x_3+\cdots=0\end{cases}</math>
and so in each row you can see that <math>x_1=1, x_2=1, x_3=1, \dots</math> works, because then the a's in each row add up to <math>\lambda</math>, which cancels the <math>-\lambda</math> term.
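A quick numerical check of part (a) (a sketch assuming NumPy; the concrete matrix is mine): take an invertible matrix whose rows each sum to lambda = 4, and look at the row sums of its inverse.

```python
import numpy as np

lam = 4.0
A = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 1.0],
              [1.0, 1.0, 2.0]])   # invertible; each row sums to 4
v = np.ones(3)

print(A @ v)                      # [4, 4, 4]: A v = lambda v
print(np.linalg.inv(A) @ v)       # each row of the inverse sums to 1/lambda = 0.25
```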
<br />
*Also, does anyone know how to do part (b) of that question? My guess is to make one subspace {0}, the second {(t,0,0)}, and the third {(0,r,s)}, for all t, r, s. Does that look okay?
<br />
R: Thanks. I think the subspaces are {0}, {(t,0,0)} and {(0,s,0)} so that <math>R^3 \neq W_1 \oplus W_2 \oplus W_3</math>.</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Final_Exam_Preparation_Forum06-240/Final Exam Preparation Forum2006-12-13T12:11:54Z<p>Wongpak: </p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
If you have questions, ask them here and hopefully someone else will know the answer. (Answering questions will probably help you understand it more).<br />
<br />
Since many of us (including I) don't really know how to use Wiki's, I suggest that we keep the formatting simple: I will post a template at the top of this page, and if you want to add something just click on the "edit", copy the template, and insert your question. Order the questions according to section (i.e. solved/unsolved; '''whoever created the question must decide if it is solved, and sort it accordingly'''), with the newest at the top, except for the template question. In general, I wouldn't retype the question if it's from the book because that's tedious and we all have the book.<br />
<br />
(By the way, I think you leave a space between lines in the code to make a new line; that is, simply pressing enter once will not make a new line. Also, you can press a button at the top of the editing textbox that lets you put in simple equations.)<br />
<br />
==Unsolved Questions==<br />
<br />
===Question Template===<br />
Q: Can someone help me prove: "If an [[integer]] <math>n</math> is greater than 2, then <math>a^n + b^n = c^n</math> has no solutions in non-zero integers <math>a</math>, <math>b</math>, and <math>c</math>."? I had the answer in my head at one point, but the margins of the piece of paper I was working with was too small to fit it.<br />
<br />
===Rank of Matrices===<br />
Q: Prove that if rankA = 0 (for A with dimension m x n), then A is the zero matrix.<br />
(Question is found on p 166 (#3))<br />
Did anyone use transformations in this?<br />
My proof relies on rank(L_A) = 0 (left-multiplication transformation)implying L_A is the zero transformation.<br />
Is there an easier way?<br />
<br />
A: Depends on what you're allowed to assume. If you can use that the rank is equal to the number of linearly independent columns, then clearly all of the columns must be zero (otherwise you have at least one linearly independent vector).<br />
<br />
===Determinants===<br />
Q : If we can make same matrix with 2n-1 times of row swaps, what does it mean ? Does it mean that determinant is 0 ?<br />
<br />
R: Is this related to a question somewhere?<br />
<br />
===Sec 3.2 Ex. 19===<br />
Q: "Let A be an m x n matrix with rank m and B be an n x p matrix with rank n. Determine the rank of AB. Justify your answer." I know how to find that the rank can't be more than m (not much of an accomplishment), but I can't finish it.<br />
<br />
A: According to Theorem 3.7(a),(c)&(d)(p.159), I would say rank(AB) <math>\le </math>min(m, n).<br />
<br />
R: Can we not get any more specific than that?<br />
<br />
A: "Let <math>L_A, L_B, L_{AB} </math> have their usual meanings. Then <math>L_B : F^p -> F^n </math> is onto. Then we get <math> R(L_{AB}) = R(L_A L_B) = L_A L_B (F^p) = L_A (F^n) = R(L_A) </math>, i.e. <math>rank(L_{AB}) = rank(L_A) = m</math>."<br />
<br />
===Sec. 3.2 Ex. 21===<br />
Q: "Let A be an m x n matrix with rank m. Prove that there exists an n x m matrix B such that AB=<math>I_m</math>".<br />
<br />
A: "Take any n x m matrix B with rank n. By exercise 19 in the same section rank AB = rank A = m, hence AB is invertible. Let M be the inverse of AB, then (AB)M = A(BM) = I, i.e. BM is the desired matrix."<br />
<br />
===Exam April/May 2006 #4===<br />
Q: Suppose that A, B Є M<sub>mxn</sub>(F), and rank(A) = rank (B). Prove that there exist invertible matrices P Є M<sub>mxm</sub>(F) and Q Є M<sub>nxn</sub>(F) such that B = PAQ.<br />
<br />
A(partial): Here is a sketch. If you rref A and B by applying a series of elementary row operation matricies, they will both look similar. That is, they will have a section of 1's and 0's (each 1 is the only number in its column) and then a section of "remaining stuff", and these sections will be the same "size" because their ranks are the same. Then, using the elementary column matrix operations, you can essentially modify the "remaining stuff" as much as you like, by adding multiples of the "nice" columns (with single-1's). These row and column operations can then be grouped nicely and set to be equal to P and Q, which are invertible because products of elementary matricies are invertible. <br />
<br />
I know this is very rough, but even if I did have a full answer I wouldn't now how to typeset it.<br />
<br />
===Complex Numbers===<br />
Q: If 'C' is used in the context of a vector space (as in "define T:C->C"), then should we consider C to be the vector space of C over the field C, or instead C over the field R?<br />
<br />
===Readings?===<br />
Q: Are we expected to section 5.2 of the textbook? Although the Assignments tell us to read it, we didn't do any questions, or cover it in class.<br />
<br />
A: I highly doubt it; hopefully someone will ask Prof. Bar-Natan tomorrow and post the answer here. There were a few other chapters that had sections we never really talked about either (some applications). Addendum: I second this request for a slight narrowing of what the relevant readings are--for instance, can we be more efficient in our reading of chapter 4 somehow?<br />
<br />
R: I think that if you want to cut down on Chapter 4, then skipping applications of area (discussed very briefly in class) and determinants of order 2 is the most you can do.<br />
<br />
R: What about 4.5, the axiomatic details. It discusses how the determinant is uniquely defined by the three axiomatic properties, but I don't think we did that in class.<br />
<br />
==Solved Questions==<br />
<br />
===Question Template===<br />
Q: How many ways are there to get to the nth stair, if at each step you can move either one or two squares up?<br />
<br />
A: This question can be easily modeled by the Fibonacci numbers, with the nth number being the ways to get to the nth stair. This is because, to get to the nth stair, you can come only from the n-1th or the n-2th. This is exactly how the Fibonacci numbers are defined; the proof is simple by induction.<br />
<br />
===Sec. 1.3 Thm 1.3 Proof===<br />
Q: In the first paragraph of the proof, it says "But also x + 0 = x , and thus 0'=0." How do we know 0 (that is 0 of V) even exists in W? I understand that we know ''some'' zero exists (0'), but not why ''the'' zero (0) exists.<br />
<br />
A: x is in W as well in V. Thus, x + 0 = x (VS 3).<br />
<br />
Reply: Oh I see... now it looks so obvious =/. Thanks.<br />
<br />
===Exam April/May 2006 #3(b)===<br />
Q: Let T : M<Sub>3x2</Sub>(C) -> M<Sub>2x3</Sub>(C) be defined as follows. Given A Є M<Sub>3x2</Sub>(C), let B be the matrix obtained from A by adding i times the second row of A to the third row of A. Let T(A) = B<Sup>t</Sup>, where B<Sup>t</Sup> is the transpose of B. (Note: Here, i is a complex number such that i<Sup>2</Sup> = -1.) Determine whether the linear transformation T is invertible.<br />
<br />
Totally lost on this question :/ Please show some example matrix and how it is transformed as the question asks if possible. I want to see what actually happens to the elements in the matrix rather than the answer (think that would be more important)<br />
<br />
A(Matrix Elements): <br />
This is my interpretation:<br />
<br />
A = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math> where <math>a_{ij} \in C</math>, then B = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\ia_{21}+a_{31}&ia_{22}+a_{32}\end{pmatrix}</math>.<br />
<br />
Therefore, T(A) = B<Sup>t</Sup> is T<math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math>=<math>\begin{pmatrix}a_{11}&a_{21}&ia_{21}+a_{31}\\a_{12}&a_{22}&ia_{22}+a_{32}\end{pmatrix}</math><br />
<br />
R: Thx alot, the matricies are really helpful :)<br />
<br />
===Sec. 2.4 Lemma p. 101===<br />
Q: In the proof of the lemma, the second line, we have "T(beta) spans R(T) = W". How do we know that R(T) = W? This would be true if dim V = dim W, because then T would be onto, but we can't assume what we're trying to prove.<br />
<br />
A: The first line of the Lemma states, "Let T be an '''invertible''' linear trans..." So, T is onto(and 1-1), thus "T(beta) spans R(T) = W".<br />
<br />
R: Yes. My trouble was with the fact that invertibility implies onto-ness. I thought that if we had <math> T:P_2 (R)->P_6(R) </math>, and T(f) = xf, then T would still be invertible since you can 'recover' the f if you were given xf. I guess it makes more sense to not call T invertible in this case, because <math>T^{-1}</math> is technically only one-to-one over the range of T.<br />
<br />
R: T has to be both Onto and 1-1 so that it's invertible. In your example, some of the <math> f \in P_6(R)</math> will not be 'recovered' because they weren't mapped from <math> P_2(R)</math>. Furthermore, T<sup>-1</sup> has to map the whole vector space W back to V(as defined on p.99) but not the range only. In other words, if T is 1-1 only, <math>T^{-1} \circ T(v) = v, \forall v\in V</math> but <math>T \circ T^{-1}(w) \neq w, \exists w\in W</math>, because some T<sup>-1</sup>(w) are not defined.<br />
<br />
R: That nicely rigorizes what I was thinking, and I'm convinced. Thanks.<br />
<br />
===Exam April/May 2006 #7===<br />
Q: Let T : V -> V and U : V -> V be linear operators on a finite-dimensional vector space V. Assume that U is invertible and T is diagonalizable. Prove that the linear operator UTU<Sup>-1</Sup> = U o T o U<Sup>-1</Sup>.<br />
<br />
I dont know where or how to start this question ><.<br />
<br />
A: I think we need to prove that UTU<Sup>-1</Sup> is diagonalizable instead of proving UTU<Sup>-1</Sup> = U o T o U<Sup>-1</Sup>. <br />
<br />
I started by letting A = UTU<Sup>-1</Sup>, then multiplying U<Sup>-1</Sup> and U to the both sides, we get U<Sup>-1</Sup>AU = U<Sup>-1</Sup>UTU<Sup>-1</Sup>U iff U<Sup>-1</Sup>AU = T. Since T is diagonalizable, therefore there exists an invertible matrix Q s.t Q<sup>-1</sup>TQ = D, where D is a diagonal matrix. Therefore, Q<sup>-1</sup>(U<Sup>-1</Sup>AU)Q = D iff (UQ)<Sup>-1</Sup>A(UQ) = D (because U and Q invertible, Q<sup>-1</sup>U<Sup>-1</Sup> = (UQ)<Sup>-1</Sup>), it follows that A is diagonalizable.<br />
<br />
===Exam April 2004 #6(a)===<br />
Q: Suppose A is an invertible matrix for which the sum of entries of each row is a scalar <math>\lambda</math>. Show that the sum of entries of each row of A<sup>-1</sup> is <math>1/\lambda</math>. (Hint: find an eigenvector for A with eigenvalue <math>\lambda</math>.)<br />
If A is a diagonal matrix, then it's obvious that the sum of entries of each row is <math>\lambda</math> and the sum of entries of each row of A<sup>-1</sup> is <math>1/\lambda</math>. I was stuck with a more general invertible matrix.<br />
<br />
A: Following the hint, you can see that an eigenvector corresponding to <math>\lambda</math> is (1, 1, 1, ...)*. Therefore <math> Av=\lambda v</math>, and rearranging you get <math> A^{-1}v=1/\lambda v</math>. Using the same logic as before, you can show that since this <math>\lambda</math> corresponds to a homogeneous system of equations with the same eigenvector v = (1, 1, 1, ...), the sum of each row is equal to <math>1/\lambda</math>.<br />
<br />
*Just to elaborate on the first part, you are looking for a vector <math> v = (x_1, x_2, x_3, ...) </math> so that <math> A-\lambda I = 0</math>. This corresponds to the system:<br />
<math>\begin{pmatrix}(a_{11}-\lambda)x_1&a_{12}x_2&a_{13}x_3&...\\a_{21}x_1&(a_{22}-\lambda)x_2&a_{23}x_3&...\\<br />
a_{11}x_1&a_{12}x_1&(a_{13}-\lambda)x_3&...\end{pmatrix}</math>,<br />
and so in each row you can see that <math>x_1=1, x_2=1, x_3=3</math> works because then all the a's in each row add up to <math>\lambda</math>.<br />
<br />
*Also, does anyone know how to do part (b) of that question? My guess is to make one subspace {0}, the second (t,0,0) and the third (0,r,s) for all t,r,s,. Does that look okay?<br />
<br />
R: Thanks. I think the subspaces are {0}, {(t,0,0)} and {(0,s,0)} so that <math>R^3 \neq W_1 \oplus W_2 \oplus W_3</math>.</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Final_Exam_Preparation_Forum06-240/Final Exam Preparation Forum2006-12-13T12:10:52Z<p>Wongpak: /* Exam April 2004 #6(a) */</p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
If you have questions, ask them here and hopefully someone else will know the answer. (Answering questions will probably help you understand it more).<br />
<br />
Since many of us (including I) don't really know how to use Wiki's, I suggest that we keep the formatting simple: I will post a template at the top of this page, and if you want to add something just click on the "edit", copy the template, and insert your question. Order the questions according to section (i.e. solved/unsolved; '''whoever created the question must decide if it is solved, and sort it accordingly'''), with the newest at the top, except for the template question. In general, I wouldn't retype the question if it's from the book because that's tedious and we all have the book.<br />
<br />
(By the way, I think you leave a space between lines in the code to make a new line; that is, simply pressing enter once will not make a new line. Also, you can press a button at the top of the editing textbox that lets you put in simple equations.)<br />
<br />
==Unsolved Questions==<br />
<br />
===Question Template===<br />
Q: Can someone help me prove: "If an [[integer]] <math>n</math> is greater than 2, then <math>a^n + b^n = c^n</math> has no solutions in non-zero integers <math>a</math>, <math>b</math>, and <math>c</math>."? I had the answer in my head at one point, but the margins of the piece of paper I was working with was too small to fit it.<br />
<br />
===Rank of Matrices===<br />
Q: Prove that if rankA = 0 (for A with dimension m x n), then A is the zero matrix.<br />
(Question is found on p 166 (#3))<br />
Did anyone use transformations in this?<br />
My proof relies on rank(L_A) = 0 (left-multiplication transformation)implying L_A is the zero transformation.<br />
Is there an easier way?<br />
<br />
A: Depends on what you're allowed to assume. If you can use that the rank is equal to the number of linearly independent columns, then clearly all of the columns must be zero (otherwise you have at least one linearly independent vector).<br />
<br />
===Determinants===<br />
Q : If we can make same matrix with 2n-1 times of row swaps, what does it mean ? Does it mean that determinant is 0 ?<br />
<br />
R: Is this related to a question somewhere?<br />
<br />
===Sec 3.2 Ex. 19===<br />
Q: "Let A be an m x n matrix with rank m and B be an n x p matrix with rank n. Determine the rank of AB. Justify your answer." I know how to find that the rank can't be more than m (not much of an accomplishment), but I can't finish it.<br />
<br />
A: According to Theorem 3.7(a),(c)&(d)(p.159), I would say rank(AB) <math>\le </math>min(m, n).<br />
<br />
R: Can we not get any more specific than that?<br />
<br />
A: "Let <math>L_A, L_B, L_{AB} </math> have their usual meanings. Then <math>L_B : F^p -> F^n </math> is onto. Then we get <math> R(L_{AB}) = R(L_A L_B) = L_A L_B (F^p) = L_A (F^n) = R(L_A) </math>, i.e. <math>rank(L_{AB}) = rank(L_A) = m</math>."<br />
<br />
===Sec. 3.2 Ex. 21===<br />
Q: "Let A be an m x n matrix with rank m. Prove that there exists an n x m matrix B such that AB=<math>I_m</math>".<br />
<br />
A: "Take any n x m matrix B with rank n. By exercise 19 in the same section rank AB = rank A = m, hence AB is invertible. Let M be the inverse of AB, then (AB)M = A(BM) = I, i.e. BM is the desired matrix."<br />
<br />
===Exam April/May 2006 #4===<br />
Q: Suppose that A, B Є M<sub>mxn</sub>(F), and rank(A) = rank (B). Prove that there exist invertible matrices P Є M<sub>mxm</sub>(F) and Q Є M<sub>nxn</sub>(F) such that B = PAQ.<br />
<br />
A(partial): Here is a sketch. If you rref A and B by applying a series of elementary row operation matricies, they will both look similar. That is, they will have a section of 1's and 0's (each 1 is the only number in its column) and then a section of "remaining stuff", and these sections will be the same "size" because their ranks are the same. Then, using the elementary column matrix operations, you can essentially modify the "remaining stuff" as much as you like, by adding multiples of the "nice" columns (with single-1's). These row and column operations can then be grouped nicely and set to be equal to P and Q, which are invertible because products of elementary matricies are invertible. <br />
<br />
I know this is very rough, but even if I did have a full answer I wouldn't now how to typeset it.<br />
<br />
===Complex Numbers===<br />
Q: If 'C' is used in the context of a vector space (as in "define T:C->C"), then should we consider C to be the vector space of C over the field C, or instead C over the field R?<br />
<br />
===Readings?===<br />
Q: Are we expected to read section 5.2 of the textbook? Although the Assignments tell us to read it, we didn't do any questions on it, or cover it in class.<br />
<br />
A: I highly doubt it; hopefully someone will ask Prof. Bar-Natan tomorrow and post the answer here. There were a few other chapters that had sections we never really talked about either (some applications). Addendum: I second this request for a slight narrowing of what the relevant readings are--for instance, can we be more efficient in our reading of chapter 4 somehow?<br />
<br />
R: I think that if you want to cut down on Chapter 4, then skipping applications of area (discussed very briefly in class) and determinants of order 2 is the most you can do.<br />
<br />
R: What about 4.5, the axiomatic details. It discusses how the determinant is uniquely defined by the three axiomatic properties, but I don't think we did that in class.<br />
<br />
===Exam April 2004 #6(a)===<br />
Q: Suppose A is an invertible matrix for which the sum of entries of each row is a scalar <math>\lambda</math>. Show that the sum of entries of each row of A<sup>-1</sup> is <math>1/\lambda</math>. (Hint: find an eigenvector for A with eigenvalue <math>\lambda</math>.)<br />
If A is a diagonal matrix, then it's obvious that the sum of entries of each row is <math>\lambda</math> and the sum of entries of each row of A<sup>-1</sup> is <math>1/\lambda</math>. I was stuck with a more general invertible matrix.<br />
<br />
A: Following the hint, you can see that an eigenvector corresponding to <math>\lambda</math> is v = (1, 1, 1, ...)*. Therefore <math> Av=\lambda v</math>, and applying <math>A^{-1}</math> to both sides and dividing by <math>\lambda</math> (nonzero, since A is invertible) you get <math> A^{-1}v=\frac{1}{\lambda} v</math>. Reading this equation entry by entry, just as before, it says exactly that the sum of the entries of each row of <math>A^{-1}</math> is <math>1/\lambda</math>.<br />
<br />
*Just to elaborate on the first part, you are looking for a vector <math> v = (x_1, x_2, x_3, \ldots) </math> so that <math> (A-\lambda I)v = 0</math>. This corresponds to the system:<br />
<math>\begin{pmatrix}(a_{11}-\lambda)x_1&a_{12}x_2&a_{13}x_3&\ldots\\a_{21}x_1&(a_{22}-\lambda)x_2&a_{23}x_3&\ldots\\<br />
a_{31}x_1&a_{32}x_2&(a_{33}-\lambda)x_3&\ldots\end{pmatrix}</math>,<br />
and so in each row you can see that <math>x_1=1, x_2=1, x_3=1, \ldots</math> works, because the a's in each row add up to <math>\lambda</math>, so each row sums to zero.<br />
<br />
*Also, does anyone know how to do part (b) of that question? My guess is to make one subspace {0}, the second {(t,0,0)} and the third {(0,r,s)} for all t, r, s. Does that look okay?<br />
<br />
R: Thanks. I think the subspaces are {0}, {(t,0,0)} and {(0,s,0)} so that <math>R^3 \neq W_1 \oplus W_2 \oplus W_3</math>.<br />
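A quick numeric check of 6(a) (the matrix here is a made-up example): for a matrix whose rows each sum to λ = 3, the all-ones vector is an eigenvector, and each row of the inverse sums to 1/3.

```python
# A has constant row sums lam = 3 (illustrative choice).
A = [[2, 1],
     [1, 2]]
lam = 3

# v = (1, 1) is an eigenvector: A v = lam * v.
v = [1, 1]
Av = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
assert Av == [lam * x for x in v]

# Inverse by the 2x2 formula; det = 2*2 - 1*1 = 3.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]

# Each row of A^{-1} sums to 1/lam.
assert all(abs(sum(row) - 1 / lam) < 1e-12 for row in A_inv)
```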
<br />
==Solved Questions==<br />
<br />
===Question Template===<br />
Q: How many ways are there to get to the nth stair, if at each step you can move either one or two squares up?<br />
<br />
A: This question can be easily modeled by the Fibonacci numbers, with the nth Fibonacci number being the number of ways to get to the nth stair. This is because, to get to the nth stair, you can come only from the (n-1)th or the (n-2)th. This is exactly how the Fibonacci numbers are defined; the proof is simple by induction.<br />
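The recurrence in the answer can be sketched in a few lines of Python, a straightforward translation with ways(1) = 1 and ways(2) = 2 as base cases:

```python
def stair_ways(n):
    """Number of ways to climb to stair n taking steps of size 1 or 2.

    Satisfies stair_ways(n) = stair_ways(n-1) + stair_ways(n-2),
    the Fibonacci recurrence.
    """
    a, b = 1, 1  # ways to "reach" stair 0 and stair 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b

# 1, 2, 3, 5, 8, 13, ... -- the Fibonacci numbers.
assert [stair_ways(n) for n in range(1, 7)] == [1, 2, 3, 5, 8, 13]
```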
<br />
===Sec. 1.3 Thm 1.3 Proof===<br />
Q: In the first paragraph of the proof, it says "But also x + 0 = x , and thus 0'=0." How do we know 0 (that is 0 of V) even exists in W? I understand that we know ''some'' zero exists (0'), but not why ''the'' zero (0) exists.<br />
<br />
A: x is in W as well as in V. Thus, x + 0 = x (VS 3).<br />
<br />
Reply: Oh I see... now it looks so obvious =/. Thanks.<br />
<br />
===Exam April/May 2006 #3(b)===<br />
Q: Let T : M<Sub>3x2</Sub>(C) -> M<Sub>2x3</Sub>(C) be defined as follows. Given A ∈ M<Sub>3x2</Sub>(C), let B be the matrix obtained from A by adding i times the second row of A to the third row of A. Let T(A) = B<Sup>t</Sup>, where B<Sup>t</Sup> is the transpose of B. (Note: Here, i is a complex number such that i<Sup>2</Sup> = -1.) Determine whether the linear transformation T is invertible.<br />
<br />
Totally lost on this question :/ Please show an example matrix and how it is transformed, if possible. I want to see what actually happens to the elements in the matrix rather than just the answer (I think that would be more important).<br />
<br />
A(Matrix Elements): <br />
This is my interpretation:<br />
<br />
A = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math> where <math>a_{ij} \in C</math>, then B = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\ia_{21}+a_{31}&ia_{22}+a_{32}\end{pmatrix}</math>.<br />
<br />
Therefore, T(A) = B<Sup>t</Sup> is T<math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math>=<math>\begin{pmatrix}a_{11}&a_{21}&ia_{21}+a_{31}\\a_{12}&a_{22}&ia_{22}+a_{32}\end{pmatrix}</math><br />
<br />
R: Thanks a lot, the matrices are really helpful :)<br />
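The transformation in the worked matrices above is easy to play with in Python, where complex numbers are built in (1j plays the role of i). Both steps are reversible (subtract i times the second row from the third, and transpose back), which is why T is invertible; the round trip below recovers A. The matrix A is an illustrative choice.

```python
def T(A):
    """A is 3x2 (a list of rows). Add i * (row 2) to row 3, then transpose."""
    B = [A[0][:], A[1][:],
         [A[2][j] + 1j * A[1][j] for j in range(2)]]
    return [[B[i][j] for i in range(3)] for j in range(2)]  # B^t, a 2x3 matrix

def T_inverse(C):
    """Undo T: transpose back, then subtract i * (row 2) from row 3."""
    B = [[C[j][i] for j in range(2)] for i in range(3)]  # undo the transpose
    return [B[0], B[1],
            [B[2][j] - 1j * B[1][j] for j in range(2)]]

A = [[1, 2],
     [3, 4],
     [5, 6]]
assert T_inverse(T(A)) == A  # the round trip recovers A, so T is invertible
```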
<br />
===Sec. 2.4 Lemma p. 101===<br />
Q: In the proof of the lemma, the second line, we have "T(beta) spans R(T) = W". How do we know that R(T) = W? This would be true if dim V = dim W, because then T would be onto, but we can't assume what we're trying to prove.<br />
<br />
A: The first line of the Lemma states, "Let T be an '''invertible''' linear trans..." So, T is onto (and 1-1), thus "T(beta) spans R(T) = W".<br />
<br />
R: Yes. My trouble was with the fact that invertibility implies onto-ness. I thought that if we had <math> T:P_2 (R)\to P_6(R) </math>, and T(f) = xf, then T would still be invertible since you can 'recover' the f if you were given xf. I guess it makes more sense to not call T invertible in this case, because <math>T^{-1}</math> is technically only defined on the range of T.<br />
<br />
R: T has to be both onto and 1-1 to be invertible. In your example, some of the <math> f \in P_6(R)</math> will not be 'recovered' because they weren't mapped to from <math> P_2(R)</math>. Furthermore, T<sup>-1</sup> has to map the whole vector space W back to V (as defined on p. 99), not just the range. In other words, if T is 1-1 only, then <math>T^{-1} \circ T(v) = v, \forall v\in V</math>, but there exists <math>w\in W</math> for which <math>T \circ T^{-1}(w)</math> fails, because some T<sup>-1</sup>(w) are not defined.<br />
<br />
R: That nicely rigorizes what I was thinking, and I'm convinced. Thanks.<br />
<br />
===Exam April/May 2006 #7===<br />
Q: Let T : V -> V and U : V -> V be linear operators on a finite-dimensional vector space V. Assume that U is invertible and T is diagonalizable. Prove that the linear operator UTU<Sup>-1</Sup> = U o T o U<Sup>-1</Sup>.<br />
<br />
I don't know where or how to start this question ><.<br />
<br />
A: I think we need to prove that UTU<Sup>-1</Sup> is diagonalizable instead of proving UTU<Sup>-1</Sup> = U o T o U<Sup>-1</Sup>. <br />
<br />
I started by letting A = UTU<Sup>-1</Sup>, then multiplying both sides by U<Sup>-1</Sup> on the left and U on the right, we get U<Sup>-1</Sup>AU = U<Sup>-1</Sup>UTU<Sup>-1</Sup>U, i.e. U<Sup>-1</Sup>AU = T. Since T is diagonalizable, there exists an invertible matrix Q such that Q<sup>-1</sup>TQ = D, where D is a diagonal matrix. Therefore Q<sup>-1</sup>(U<Sup>-1</Sup>AU)Q = D, i.e. (UQ)<Sup>-1</Sup>A(UQ) = D (because U and Q are invertible, Q<sup>-1</sup>U<Sup>-1</sup> = (UQ)<sup>-1</sup>); it follows that A is diagonalizable.</div>
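The algebra in the answer can be spot-checked numerically (the matrices below are a made-up example): with T already diagonal (so Q = I and D = T), conjugating A = UTU<Sup>-1</Sup> by UQ = U recovers the diagonal matrix.

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

T = [[1, 0],
     [0, 2]]          # diagonalizable (already diagonal, so Q = I and D = T)
U = [[1, 1],
     [0, 1]]          # invertible
U_inv = [[1, -1],
         [0, 1]]
assert matmul(U, U_inv) == [[1, 0], [0, 1]]  # check: U U^{-1} = I

A = matmul(matmul(U, T), U_inv)          # A = U T U^{-1}
assert matmul(matmul(U_inv, A), U) == T  # (UQ)^{-1} A (UQ) = D, here with Q = I
```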
<br />
I started by letting A = UTU<Sup>-1</Sup>, then multiplying U<Sup>-1</Sup> and U to the both sides, we get U<Sup>-1</Sup>AU = U<Sup>-1</Sup>UTU<Sup>-1</Sup>U iff U<Sup>-1</Sup>AU = T. Since T is diagonalizable, therefore there exists an invertible matrix Q s.t Q<sup>-1</sup>TQ = D, where D is a diagonal matrix. Therefore, Q<sup>-1</sup>(U<Sup>-1</Sup>AU)Q = D iff (UQ)<Sup>-1</Sup>A(UQ) = D (because U and Q invertible, Q<sup>-1</sup>U<Sup>-1</Sup> = (UQ)<Sup>-1</Sup>), it follows that A is diagonalizable.</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Final_Exam_Preparation_Forum06-240/Final Exam Preparation Forum2006-12-12T18:58:03Z<p>Wongpak: </p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
If you have questions, ask them here and hopefully someone else will know the answer. (Answering questions will probably help you understand it more).<br />
<br />
Since many of us (including I) don't really know how to use Wiki's, I suggest that we keep the formatting simple: I will post a template at the top of this page, and if you want to add something just click on the "edit", copy the template, and insert your question. Order the questions according to section (i.e. solved/unsolved; '''whoever created the question must decide if it is solved'''), with the newest at the top, except for the template question. In general, I wouldn't retype the question if it's from the book because that's tedious and we all have the book.<br />
<br />
(By the way, I think you leave a space between lines in the code to make a new line; that is, simply pressing enter once will not make a new line. Also, you can press a button at the top of the editing textbox that lets you put in simple equations.)<br />
<br />
==Unsolved Questions==<br />
<br />
===Question Template===<br />
Q: Can someone help me prove: "If an [[integer]] <math>n</math> is greater than 2, then <math>a^n + b^n = c^n</math> has no solutions in non-zero integers <math>a</math>, <math>b</math>, and <math>c</math>."? I had the answer in my head at one point, but the margins of the piece of paper I was working with was too small to fit it.<br />
<br />
===Determinants===<br />
Q : If we can make same matrix with 2n-1 times of row swaps, what does it mean ? Does it mean that determinant is 0 ?<br />
<br />
R: Is this related to a question somewhere?<br />
<br />
===Sec 3.2 Ex. 19===<br />
Q: "Let A be an m x n matrix with rank m and B be an n x p matrix with rank n. Determine the rank of AB. Justify your answer." I know how to find that the rank can't be more than m (not much of an accomplishment), but I can't finish it.<br />
<br />
A: According to Theorem 3.7(a),(c)&(d)(p.159), I would say rank(AB) <math>\le </math>min(m, n).<br />
<br />
R: Can we not get any more specific than that?<br />
<br />
===Sec. 3.2 Ex. 21===<br />
Q: "Let A be an m x n matrix with rank m. Prove that there exists an n x m matrix B such that AB=<math>I_m</math>".<br />
<br />
<br />
===Exam April/May 2006 #4===<br />
Q: Suppose that A, B Є M<sub>mxn</sub>(F), and rank(A) = rank (B). Prove that there exist invertible matrices P Є M<sub>mxm</sub>(F) and Q Є M<sub>nxn</sub>(F) such that B = PAQ.<br />
<br />
A(partial): Here is a sketch. If you rref A and B by applying a series of elementary row operation matricies, they will both look similar. That is, they will have a section of 1's and 0's (each 1 is the only number in its column) and then a section of "remaining stuff", and these sections will be the same "size" because their ranks are the same. Then, using the elementary column matrix operations, you can essentially modify the "remaining stuff" as much as you like, by adding multiples of the "nice" columns (with single-1's). These row and column operations can then be grouped nicely and set to be equal to P and Q, which are invertible because products of elementary matricies are invertible. <br />
<br />
I know this is very rough, but even if I did have a full answer I wouldn't now how to typeset it.<br />
<br />
===Complex Numbers===<br />
Q: If 'C' is used in the context of a vector space (as in "define T:C->C"), then should we consider C to be the vector space of C over the field C, or instead C over the field R?<br />
<br />
===Readings?===<br />
Q: Are we expected to section 5.2 of the textbook? Although the Assignments tell us to read it, we didn't do any questions, or cover it in class.<br />
<br />
A: I highly doubt it; hopefully someone will ask Prof. Bar-Natan tomorrow and post the answer here. There were a few other chapters that had sections we never really talked about either (some applications). Addendum: I second this request for a slight narrowing of what the relevant readings are--for instance, can we be more efficient in our reading of chapter 4 somehow?<br />
<br />
R: I think that if you want to cut down on Chapter 4, then skipping applications of area (discussed very briefly in class) and determinants of order 2 is the most you can do.<br />
<br />
==Solved Questions==<br />
<br />
===Question Template===<br />
Q: How many ways are there to get to the nth stair, if at each step you can move either one or two squares up?<br />
<br />
A: This question can be easily modeled by the Fibonacci numbers, with the nth number being the number of ways to get to the nth stair. This is because, to get to the nth stair, you can come only from the (n-1)th or the (n-2)th stair. This is exactly how the Fibonacci numbers are defined; the proof is simple by induction.<br />
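For anyone who wants to check the recurrence numerically, here is a short Python sketch (the function name is mine):<br />

```python
def ways_to_stair(n):
    """Ways to reach stair n with steps of size 1 or 2: the Fibonacci recurrence."""
    a, b = 1, 1          # ways to reach stair 0 and stair 1
    for _ in range(n - 1):
        a, b = b, a + b  # ways(k) = ways(k-1) + ways(k-2)
    return b

# The first few values follow the Fibonacci numbers, as argued above.
assert [ways_to_stair(n) for n in range(1, 7)] == [1, 2, 3, 5, 8, 13]
```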
<br />
===Sec. 1.3 Thm 1.3 Proof===<br />
Q: In the first paragraph of the proof, it says "But also x + 0 = x , and thus 0'=0." How do we know 0 (that is 0 of V) even exists in W? I understand that we know ''some'' zero exists (0'), but not why ''the'' zero (0) exists.<br />
<br />
A: x is in W as well as in V. Thus, x + 0 = x (VS 3).<br />
<br />
Reply: Oh I see... now it looks so obvious =/. Thanks.<br />
<br />
===Exam April/May 2006 #3(b)===<br />
Q: Let T : M<Sub>3x2</Sub>(C) -> M<Sub>2x3</Sub>(C) be defined as follows. Given A Є M<Sub>3x2</Sub>(C), let B be the matrix obtained from A by adding i times the second row of A to the third row of A. Let T(A) = B<Sup>t</Sup>, where B<Sup>t</Sup> is the transpose of B. (Note: Here, i is a complex number such that i<Sup>2</Sup> = -1.) Determine whether the linear transformation T is invertible.<br />
<br />
Totally lost on this question :/ Please show some example matrix and how it is transformed as the question asks if possible. I want to see what actually happens to the elements in the matrix rather than the answer (think that would be more important)<br />
<br />
A(Matrix Elements): <br />
This is my interpretation:<br />
<br />
A = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math> where <math>a_{ij} \in C</math>, then B = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\ia_{21}+a_{31}&ia_{22}+a_{32}\end{pmatrix}</math>.<br />
<br />
Therefore, T(A) = B<Sup>t</Sup> is T<math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math>=<math>\begin{pmatrix}a_{11}&a_{21}&ia_{21}+a_{31}\\a_{12}&a_{22}&ia_{22}+a_{32}\end{pmatrix}</math><br />
<br />
R: Thx a lot, the matrices are really helpful :)<br />
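A(code check): Since T is just an elementary row operation followed by a transpose, and both steps are reversible, T is invertible. Here is a small Python sketch of T and its inverse acting on the matrices above (my own illustration, not part of the exam):<br />

```python
def T(A):
    """T: M_{3x2}(C) -> M_{2x3}(C): add i*(row 2) to row 3, then transpose."""
    B = [A[0][:], A[1][:],
         [A[2][0] + 1j * A[1][0], A[2][1] + 1j * A[1][1]]]
    return [list(col) for col in zip(*B)]  # transpose of B

def T_inv(M):
    """Undo T: transpose back, then subtract i*(row 2) from row 3."""
    B = [list(col) for col in zip(*M)]
    B[2] = [B[2][0] - 1j * B[1][0], B[2][1] - 1j * B[1][1]]
    return B

A = [[1, 2], [3, 4], [5, 6]]
assert T_inv(T(A)) == [[1, 2], [3, 4], [5, 6]]  # T_inv really inverts T
```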
<br />
===Sec. 2.4 Lemma p. 101===<br />
Q: In the proof of the lemma, the second line, we have "T(beta) spans R(T) = W". How do we know that R(T) = W? This would be true if dim V = dim W, because then T would be onto, but we can't assume what we're trying to prove.<br />
<br />
R: The first line of the Lemma states, "Let T be an '''invertible''' linear trans..." So, T is onto (and 1-1), thus "T(beta) spans R(T) = W".<br />
<br />
===Exam April/May 2006 #7===<br />
Q: Let T : V -> V and U : V -> V be linear operators on a finite-dimensional vector space V. Assume that U is invertible and T is diagonalizable. Prove that the linear operator UTU<Sup>-1</Sup> = U o T o U<Sup>-1</Sup>.<br />
<br />
I don't know where or how to start this question ><.<br />
<br />
A: I think we need to prove that UTU<Sup>-1</Sup> is diagonalizable instead of proving UTU<Sup>-1</Sup> = U o T o U<Sup>-1</Sup>. <br />
<br />
I started by letting A = UTU<Sup>-1</Sup>; then, multiplying both sides by U<Sup>-1</Sup> on the left and U on the right, we get U<Sup>-1</Sup>AU = U<Sup>-1</Sup>UTU<Sup>-1</Sup>U, i.e. U<Sup>-1</Sup>AU = T. Since T is diagonalizable, there exists an invertible matrix Q s.t. Q<sup>-1</sup>TQ = D, where D is a diagonal matrix. Therefore Q<sup>-1</sup>(U<Sup>-1</Sup>AU)Q = D, i.e. (UQ)<Sup>-1</Sup>A(UQ) = D (because U and Q are invertible, Q<sup>-1</sup>U<sup>-1</sup> = (UQ)<Sup>-1</Sup>), and it follows that A is diagonalizable.</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Final_Exam_Preparation_Forum06-240/Final Exam Preparation Forum2006-12-12T18:47:34Z<p>Wongpak: </p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
If you have questions, ask them here and hopefully someone else will know the answer. (Answering questions will probably help you understand it more).<br />
<br />
Since many of us (including me) don't really know how to use wikis, I suggest that we keep the formatting simple: I will post a template at the top of this page, and if you want to add something just click on "edit", copy the template, and insert your question. Order the questions according to section (i.e. solved/unsolved; '''whoever created the question must decide if it is solved'''), with the newest at the top, except for the template question. In general, I wouldn't retype the question if it's from the book because that's tedious and we all have the book.<br />
<br />
(By the way, I think you leave a space between lines in the code to make a new line; that is, simply pressing enter once will not make a new line. Also, you can press a button at the top of the editing textbox that lets you put in simple equations.)<br />
<br />
==Unsolved Questions==<br />
<br />
===Question Template===<br />
Q: Can someone help me prove: "If an [[integer]] <math>n</math> is greater than 2, then <math>a^n + b^n = c^n</math> has no solutions in non-zero integers <math>a</math>, <math>b</math>, and <math>c</math>."? I had the answer in my head at one point, but the margins of the piece of paper I was working with was too small to fit it.<br />
<br />
===Determinants===<br />
Q: If we can get back the same matrix after 2n-1 row swaps, what does that mean? Does it mean that the determinant is 0?<br />
<br />
R: Is this related to a question somewhere?<br />
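R: If an odd number (2n-1) of swaps returns the same matrix, then det(A) = (-1)<Sup>2n-1</Sup>det(A) = -det(A), which forces det(A) = 0 (at least over R or C, where x = -x implies x = 0). A quick Python sanity check of the sign-flip rule, using a 2x2 determinant and an example I made up:<br />

```python
def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# A has two equal rows, so ONE (odd) swap reproduces A; det(A) must then be 0.
A = [[1, 2], [1, 2]]
swapped = [A[1], A[0]]
assert swapped == A          # one odd swap gives the same matrix back
assert det2(A) == 0          # consistent with det(A) = -det(A)
assert det2([[1, 2], [3, 4]]) == -det2([[3, 4], [1, 2]])  # a swap flips the sign
```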
<br />
===Sec 3.2 Ex. 19===<br />
Q: "Let A be an m x n matrix with rank m and B be an n x p matrix with rank n. Determine the rank of AB. Justify your answer." I know how to find that the rank can't be more than m (not much of an accomplishment), but I can't finish it.<br />
<br />
A: According to Theorem 3.7(a),(c)&(d)(p.159), I would say rank(AB) <math>\le </math>min(m, n).<br />
<br />
R: Can we not get any more specific than that?<br />
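R: A numerical experiment (not a proof) suggests the sharper answer rank(AB) = m: L<Sub>B</Sub> is onto since rank(B) = n, and L<Sub>A</Sub> is onto since rank(A) = m, so the composition is onto F<Sup>m</Sup>. Here is a small Python check with a hand-rolled rank function over the rationals (the particular A and B are my own):<br />

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix (list of rows) via Gaussian elimination over Q."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0  # next pivot row
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

A = [[1, 0, 2], [0, 1, 3]]                      # 2x3 with rank 2 (= m)
B = [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]]  # 3x4 with rank 3 (= n)
assert (rank(A), rank(B), rank(matmul(A, B))) == (2, 3, 2)  # rank(AB) = m
```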
<br />
===Sec. 3.2 Ex. 21===<br />
Q: "Let A be an m x n matrix with rank m. Prove that there exists an n x m matrix B such that AB=<math>I_m</math>".<br />
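A(example): One concrete instance in Python (the particular A and B are mine, not from the book). The general proof idea: rank m means L<Sub>A</Sub> is onto, so each standard basis vector e<Sub>i</Sub> of F<Sup>m</Sup> has a preimage, and those preimages can be taken as the columns of B:<br />

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

# A is 2x3 with rank 2 (= m); B is a right inverse read off by hand:
# its columns are preimages of e_1 and e_2 under L_A.
A = [[1, 0, 4], [0, 1, 5]]
B = [[1, 0], [0, 1], [0, 0]]
I2 = [[1, 0], [0, 1]]
assert matmul(A, B) == I2  # AB = I_m
```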
<br />
===Exam April/May 2006 #7===<br />
Q: Let T : V -> V and U : V -> V be linear operators on a finite-dimensional vector space V. Assume that U is invertible and T is diagonalizable. Prove that the linear operator UTU<Sup>-1</Sup> = U o T o U<Sup>-1</Sup>.<br />
<br />
I don't know where or how to start this question ><.<br />
<br />
A: I think we need to prove that UTU<Sup>-1</Sup> is diagonalizable instead of proving UTU<Sup>-1</Sup> = U o T o U<Sup>-1</Sup>. <br />
<br />
I started by letting A = UTU<Sup>-1</Sup>; then, multiplying both sides by U<Sup>-1</Sup> on the left and U on the right, we get U<Sup>-1</Sup>AU = U<Sup>-1</Sup>UTU<Sup>-1</Sup>U, i.e. U<Sup>-1</Sup>AU = T. Since T is diagonalizable and U is invertible, A and T are similar, thus A is diagonalizable. Please comment. Thanks.<br />
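R: A quick numeric sanity check of that argument in Python, taking Q = I for simplicity (so T is already diagonal); the matrices are my own toy choices, not from the exam:<br />

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

D = [[2, 0], [0, 3]]        # T diagonalizable; take Q = I so Q^{-1}TQ = D
T = D
U = [[1, 1], [0, 1]]        # invertible
U_inv = [[1, -1], [0, 1]]   # inverse of U, computed by hand

A = matmul(matmul(U, T), U_inv)          # A = UTU^{-1}
assert matmul(matmul(U_inv, A), U) == D  # (UQ)^{-1}A(UQ) = D with Q = I
```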
<br />
===Exam April/May 2006 #4===<br />
Q: Suppose that A, B Є M<sub>mxn</sub>(F), and rank(A) = rank (B). Prove that there exist invertible matrices P Є M<sub>mxm</sub>(F) and Q Є M<sub>nxn</sub>(F) such that B = PAQ.<br />
<br />
A(partial): Here is a sketch. If you rref A and B by applying a series of elementary row operation matrices, they will both look similar. That is, they will have a section of 1's and 0's (each 1 is the only number in its column) and then a section of "remaining stuff", and these sections will be the same "size" because their ranks are the same. Then, using elementary column operations, you can essentially modify the "remaining stuff" as much as you like, by adding multiples of the "nice" columns (with single 1's). These row and column operations can then be grouped nicely and set equal to P and Q, which are invertible because products of elementary matrices are invertible. <br />
<br />
I know this is very rough, but even if I did have a full answer I wouldn't know how to typeset it.<br />
<br />
===Complex Numbers===<br />
Q: If 'C' is used in the context of a vector space (as in "define T:C->C"), then should we consider C to be the vector space of C over the field C, or instead C over the field R?<br />
<br />
===Readings?===<br />
Q: Are we expected to read section 5.2 of the textbook? Although the Assignments tell us to read it, we didn't do any questions on it, or cover it in class.<br />
<br />
A: I highly doubt it; hopefully someone will ask Prof. Bar-Natan tomorrow and post the answer here. There were a few other chapters that had sections we never really talked about either (some applications). Addendum: I second this request for a slight narrowing of what the relevant readings are--for instance, can we be more efficient in our reading of chapter 4 somehow?<br />
<br />
R: I think that if you want to cut down on Chapter 4, then skipping applications of area (discussed very briefly in class) and determinants of order 2 is the most you can do.<br />
<br />
==Solved Questions==<br />
<br />
===Question Template===<br />
Q: How many ways are there to get to the nth stair, if at each step you can move either one or two squares up?<br />
<br />
A: This question can be easily modeled by the Fibonacci numbers, with the nth number being the number of ways to get to the nth stair. This is because, to get to the nth stair, you can come only from the (n-1)th or the (n-2)th stair. This is exactly how the Fibonacci numbers are defined; the proof is simple by induction.<br />
<br />
===Sec. 1.3 Thm 1.3 Proof===<br />
Q: In the first paragraph of the proof, it says "But also x + 0 = x , and thus 0'=0." How do we know 0 (that is 0 of V) even exists in W? I understand that we know ''some'' zero exists (0'), but not why ''the'' zero (0) exists.<br />
<br />
A: x is in W as well as in V. Thus, x + 0 = x (VS 3).<br />
<br />
Reply: Oh I see... now it looks so obvious =/. Thanks.<br />
<br />
===Exam April/May 2006 #3(b)===<br />
Q: Let T : M<Sub>3x2</Sub>(C) -> M<Sub>2x3</Sub>(C) be defined as follows. Given A Є M<Sub>3x2</Sub>(C), let B be the matrix obtained from A by adding i times the second row of A to the third row of A. Let T(A) = B<Sup>t</Sup>, where B<Sup>t</Sup> is the transpose of B. (Note: Here, i is a complex number such that i<Sup>2</Sup> = -1.) Determine whether the linear transformation T is invertible.<br />
<br />
Totally lost on this question :/ Please show some example matrix and how it is transformed as the question asks if possible. I want to see what actually happens to the elements in the matrix rather than the answer (think that would be more important)<br />
<br />
A(Matrix Elements): <br />
This is my interpretation:<br />
<br />
A = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math> where <math>a_{ij} \in C</math>, then B = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\ia_{21}+a_{31}&ia_{22}+a_{32}\end{pmatrix}</math>.<br />
<br />
Therefore, T(A) = B<Sup>t</Sup> is T<math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math>=<math>\begin{pmatrix}a_{11}&a_{21}&ia_{21}+a_{31}\\a_{12}&a_{22}&ia_{22}+a_{32}\end{pmatrix}</math><br />
<br />
R: Thx a lot, the matrices are really helpful :)<br />
<br />
===Sec. 2.4 Lemma p. 101===<br />
Q: In the proof of the lemma, the second line, we have "T(beta) spans R(T) = W". How do we know that R(T) = W? This would be true if dim V = dim W, because then T would be onto, but we can't assume what we're trying to prove.<br />
<br />
R: The first line of the Lemma states, "Let T be an '''invertible''' linear trans..." So, T is onto (and 1-1), thus "T(beta) spans R(T) = W".</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Final_Exam_Preparation_Forum06-240/Final Exam Preparation Forum2006-12-12T12:19:31Z<p>Wongpak: /* Sec 3.2 Ex. 19 */</p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
If you have questions, ask them here and hopefully someone else will know the answer. (Answering questions will probably help you understand it more).<br />
<br />
Since many of us (including me) don't really know how to use wikis, I suggest that we keep the formatting simple: I will post a template at the top of this page, and if you want to add something just click on "edit", copy the template, and insert your question. Order the questions according to section (i.e. solved/unsolved; '''whoever created the question must decide if it is solved'''), with the newest at the top, except for the template question. In general, I wouldn't retype the question if it's from the book because that's tedious and we all have the book.<br />
<br />
(By the way, I think you leave a space between lines in the code to make a new line; that is, simply pressing enter once will not make a new line. Also, you can press a button at the top of the editing textbox that lets you put in simple equations.)<br />
<br />
==Unsolved Questions==<br />
<br />
===Question Template===<br />
Q: Can someone help me prove: "If an [[integer]] <math>n</math> is greater than 2, then <math>a^n + b^n = c^n</math> has no solutions in non-zero integers <math>a</math>, <math>b</math>, and <math>c</math>."? I had the answer in my head at one point, but the margins of the piece of paper I was working with was too small to fit it.<br />
<br />
===Sec 3.2 Ex. 19===<br />
Q: "Let A be an m x n matrix with rank m and B be an n x p matrix with rank n. Determine the rank of AB. Justify your answer." I know how to find that the rank can't be more than m (not much of an accomplishment), but I can't finish it.<br />
<br />
A: According to Theorem 3.7(a),(c)&(d)(p.159), I would say rank(AB) <math>\le </math>min(m, n).<br />
<br />
===Sec. 3.2 Ex. 21===<br />
Q: "Let A be an m x n matrix with rank m. Prove that there exists an n x m matrix B such that AB=<math>I_m</math>".<br />
<br />
===Exam April/May 2006 #7===<br />
Q: Let T : V -> V and U : V -> V be linear operators on a finite-dimensional vector space V. Assume that U is invertible and T is diagonalizable. Prove that the linear operator UTU<Sup>-1</Sup> = U o T o U<Sup>-1</Sup>.<br />
<br />
I don't know where or how to start this question ><.<br />
<br />
A: I think we need to prove that UTU<Sup>-1</Sup> is diagonalizable instead of proving UTU<Sup>-1</Sup> = U o T o U<Sup>-1</Sup>. <br />
<br />
I started by letting A = UTU<Sup>-1</Sup>; then, multiplying both sides by U<Sup>-1</Sup> on the left and U on the right, we get U<Sup>-1</Sup>AU = U<Sup>-1</Sup>UTU<Sup>-1</Sup>U, i.e. U<Sup>-1</Sup>AU = T. Since T is diagonalizable and U is invertible, A and T are similar, thus A is diagonalizable. Please comment. Thanks.<br />
<br />
===Sec. 2.4 Lemma p. 101===<br />
Q: In the proof of the lemma, the second line, we have "T(beta) spans R(T) = W". How do we know that R(T) = W? This would be true if dim V = dim W, because then T would be onto, but we can't assume what we're trying to prove.<br />
<br />
===Exam April/May 2006 #4===<br />
Q: Suppose that A, B Є M<sub>mxn</sub>(F), and rank(A) = rank (B). Prove that there exist invertible matrices P Є M<sub>mxm</sub>(F) and Q Є M<sub>nxn</sub>(F) such that B = PAQ.<br />
<br />
A(partial): Here is a sketch. If you rref A and B by applying a series of elementary row operation matrices, they will both look similar. That is, they will have a section of 1's and 0's (each 1 is the only number in its column) and then a section of "remaining stuff", and these sections will be the same "size" because their ranks are the same. Then, using elementary column operations, you can essentially modify the "remaining stuff" as much as you like, by adding multiples of the "nice" columns (with single 1's). These row and column operations can then be grouped nicely and set equal to P and Q, which are invertible because products of elementary matrices are invertible. <br />
<br />
I know this is very rough, but even if I did have a full answer I wouldn't know how to typeset it.<br />
<br />
===Complex Numbers===<br />
Q: If 'C' is used in the context of a vector space (as in "define T:C->C"), then should we consider C to be the vector space of C over the field C, or instead C over the field R?<br />
<br />
===Readings?===<br />
Q: Are we expected to read section 5.2 of the textbook? Although the Assignments tell us to read it, we didn't do any questions on it, or cover it in class.<br />
<br />
A: I highly doubt it; hopefully someone will ask Prof. Bar-Natan tomorrow and post the answer here. There were a few other chapters that had sections we never really talked about either (some applications).<br />
<br />
===Determinants===<br />
Q: If we can get back the same matrix after 2n-1 row swaps, what does that mean? Does it mean that the determinant is 0?<br />
<br />
==Solved Questions==<br />
<br />
===Question Template===<br />
Q: How many ways are there to get to the nth stair, if at each step you can move either one or two squares up?<br />
<br />
A: This question can be easily modeled by the Fibonacci numbers, with the nth number being the number of ways to get to the nth stair. This is because, to get to the nth stair, you can come only from the (n-1)th or the (n-2)th stair. This is exactly how the Fibonacci numbers are defined; the proof is simple by induction.<br />
<br />
===Sec. 1.3 Thm 1.3 Proof===<br />
Q: In the first paragraph of the proof, it says "But also x + 0 = x , and thus 0'=0." How do we know 0 (that is 0 of V) even exists in W? I understand that we know ''some'' zero exists (0'), but not why ''the'' zero (0) exists.<br />
<br />
A: x is in W as well as in V. Thus, x + 0 = x (VS 3).<br />
<br />
Reply: Oh I see... now it looks so obvious =/. Thanks.<br />
<br />
===Exam April/May 2006 #3(b)===<br />
Q: Let T : M<Sub>3x2</Sub>(C) -> M<Sub>2x3</Sub>(C) be defined as follows. Given A Є M<Sub>3x2</Sub>(C), let B be the matrix obtained from A by adding i times the second row of A to the third row of A. Let T(A) = B<Sup>t</Sup>, where B<Sup>t</Sup> is the transpose of B. (Note: Here, i is a complex number such that i<Sup>2</Sup> = -1.) Determine whether the linear transformation T is invertible.<br />
<br />
Totally lost on this question :/ Please show some example matrix and how it is transformed as the question asks if possible. I want to see what actually happens to the elements in the matrix rather than the answer (think that would be more important)<br />
<br />
A(Matrix Elements): <br />
This is my interpretation:<br />
<br />
A = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math> where <math>a_{ij} \in C</math>, then B = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\ia_{21}+a_{31}&ia_{22}+a_{32}\end{pmatrix}</math>.<br />
<br />
Therefore, T(A) = B<Sup>t</Sup> is T<math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math>=<math>\begin{pmatrix}a_{11}&a_{21}&ia_{21}+a_{31}\\a_{12}&a_{22}&ia_{22}+a_{32}\end{pmatrix}</math><br />
<br />
R: Thx a lot, the matrices are really helpful :)</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Final_Exam_Preparation_Forum06-240/Final Exam Preparation Forum2006-12-11T21:56:25Z<p>Wongpak: /* Exam April/May 2006 #7 */</p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
If you have questions, ask them here and hopefully someone else will know the answer. (Answering questions will probably help you understand it more).<br />
<br />
Since many of us (including me) don't really know how to use wikis, I suggest that we keep the formatting simple: I will post a template at the top of this page, and if you want to add something just click on "edit", copy the template, and insert your question. Order the questions according to section (solved/unsolved, as judged by whoever created the question), with the newest at the top, except for the template question. In general, I wouldn't retype the question if it's from the book because that's tedious and we all have the book.<br />
<br />
(By the way, I think you leave a space between lines in the code to make a new line; that is, simply pressing enter once will not make a new line. Also, you can press a button at the top of the editing textbox that lets you put in simple equations.)<br />
<br />
==Unsolved Questions==<br />
<br />
===Question Template===<br />
Q: Can someone help me prove: "If an [[integer]] <math>n</math> is greater than 2, then <math>a^n + b^n = c^n</math> has no solutions in non-zero integers <math>a</math>, <math>b</math>, and <math>c</math>."? I had the answer in my head at one point, but the margins of the piece of paper I was working with was too small to fit it.<br />
<br />
===Exam April/May 2006 #3(b)===<br />
Q: Let T : M<Sub>3x2</Sub>(C) -> M<Sub>2x3</Sub>(C) be defined as follows. Given A Є M<Sub>3x2</Sub>(C), let B be the matrix obtained from A by adding i times the second row of A to the third row of A. Let T(A) = B<Sup>t</Sup>, where B<Sup>t</Sup> is the transpose of B. (Note: Here, i is a complex number such that i<Sup>2</Sup> = -1.) Determine whether the linear transformation T is invertible.<br />
<br />
Totally lost on this question :/ Please show some example matrix and how it is transformed as the question asks if possible. I want to see what actually happens to the elements in the matrix rather than the answer (think that would be more important)<br />
<br />
A(Matrix Elements): <br />
This is my interpretation:<br />
<br />
A = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math> where <math>a_{ij} \in C</math>, then B = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\ia_{21}+a_{31}&ia_{22}+a_{32}\end{pmatrix}</math>.<br />
<br />
Therefore, T(A) = B<Sup>t</Sup> is T<math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math>=<math>\begin{pmatrix}a_{11}&a_{21}&ia_{21}+a_{31}\\a_{12}&a_{22}&ia_{22}+a_{32}\end{pmatrix}</math><br />
<br />
===Exam April/May 2006 #7===<br />
Q: Let T : V -> V and U : V -> V be linear operators on a finite-dimensional vector space V. Assume that U is invertible and T is diagonalizable. Prove that the linear operator UTU<Sup>-1</Sup> = U o T o U<Sup>-1</Sup>.<br />
<br />
I don't know where or how to start this question ><.<br />
<br />
A: I think we need to prove that UTU<Sup>-1</Sup> is diagonalizable instead of proving UTU<Sup>-1</Sup> = U o T o U<Sup>-1</Sup>. <br />
<br />
I started by letting A = UTU<Sup>-1</Sup>; then, multiplying both sides by U<Sup>-1</Sup> on the left and U on the right, we get U<Sup>-1</Sup>AU = U<Sup>-1</Sup>UTU<Sup>-1</Sup>U, i.e. U<Sup>-1</Sup>AU = T. Since T is diagonalizable and U is invertible, A and T are similar, thus A is diagonalizable. Please comment. Thanks.<br />
<br />
===Sec. 2.4 Lemma p. 101===<br />
Q: In the proof of the lemma, the second line, we have "T(beta) spans R(T) = W". How do we know that R(T) = W? This would be true if dim V = dim W, because then T would be onto, but we can't assume what we're trying to prove.<br />
<br />
===Exam April/May 2006 #4===<br />
Q: Suppose that A, B Є M<sub>mxn</sub>(F), and rank(A) = rank (B). Prove that there exist invertible matrices P Є M<sub>mxm</sub>(F) and Q Є M<sub>nxn</sub>(F) such that B = PAQ.<br />
<br />
A(partial): Here is a sketch. If you rref A and B by applying a series of elementary row operation matrices, they will both look similar. That is, they will have a section of 1's and 0's (each 1 is the only number in its column) and then a section of "remaining stuff", and these sections will be the same "size" because their ranks are the same. Then, using elementary column operations, you can essentially modify the "remaining stuff" as much as you like, by adding multiples of the "nice" columns (with single 1's). These row and column operations can then be grouped nicely and set equal to P and Q, which are invertible because products of elementary matrices are invertible. <br />
<br />
I know this is very rough, but even if I did have a full answer I wouldn't know how to typeset it.<br />
<br />
===Complex Numbers===<br />
Q: If 'C' is used in the context of a vector space (as in "define T:C->C"), then should we consider C to be the vector space of C over the field C, or instead C over the field R?<br />
<br />
==Solved Questions==<br />
<br />
===Question Template===<br />
Q: How many ways are there to get to the nth stair, if at each step you can move either one or two squares up?<br />
<br />
A: This question can be easily modeled by the Fibonacci numbers, with the nth number being the number of ways to get to the nth stair. This is because, to get to the nth stair, you can come only from the (n-1)th or the (n-2)th stair. This is exactly how the Fibonacci numbers are defined; the proof is simple by induction.<br />
<br />
===Sec. 1.3 Thm 1.3 Proof===<br />
Q: In the first paragraph of the proof, it says "But also x + 0 = x , and thus 0'=0." How do we know 0 (that is 0 of V) even exists in W? I understand that we know ''some'' zero exists (0'), but not why ''the'' zero (0) exists.<br />
<br />
A: x is in W as well as in V. Thus, x + 0 = x (VS 3).<br />
<br />
Reply: Oh I see... now it looks so obvious =/. Thanks.</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Final_Exam_Preparation_Forum06-240/Final Exam Preparation Forum2006-12-11T21:31:18Z<p>Wongpak: /* Exam April/May 2006 #3(b) */</p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
If you have questions, ask them here and hopefully someone else will know the answer. (Answering questions will probably help you understand it more).<br />
<br />
Since many of us (including me) don't really know how to use wikis, I suggest that we keep the formatting simple: I will post a template at the top of this page, and if you want to add something just click on "edit", copy the template, and insert your question. Order the questions according to section (solved/unsolved, as judged by whoever created the question), with the newest at the top, except for the template question. In general, I wouldn't retype the question if it's from the book because that's tedious and we all have the book.<br />
<br />
(By the way, I think you leave a space between lines in the code to make a new line; that is, simply pressing enter once will not make a new line. Also, you can press a button at the top of the editing textbox that lets you put in simple equations.)<br />
<br />
==Unsolved Questions==<br />
<br />
===Question Template===<br />
Q: Can someone help me prove: "If an [[integer]] <math>n</math> is greater than 2, then <math>a^n + b^n = c^n</math> has no solutions in non-zero integers <math>a</math>, <math>b</math>, and <math>c</math>."? I had the answer in my head at one point, but the margins of the piece of paper I was working with was too small to fit it.<br />
<br />
===Exam April/May 2006 #3(b)===<br />
Q: Let T : M<Sub>3x2</Sub>(C) -> M<Sub>2x3</Sub>(C) be defined as follows. Given A Є M<Sub>3x2</Sub>(C), let B be the matrix obtained from A by adding i times the second row of A to the third row of A. Let T(A) = B<Sup>t</Sup>, where B<Sup>t</Sup> is the transpose of B. (Note: Here, i is a complex number such that i<Sup>2</Sup> = -1.) Determine whether the linear transformation T is invertible.<br />
<br />
Totally lost on this question :/ Please show some example matrix and how it is transformed as the question asks if possible. I want to see what actually happens to the elements in the matrix rather than the answer (think that would be more important)<br />
<br />
A(Matrix Elements): <br />
This is my interpretation:<br />
<br />
A = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math> where <math>a_{ij} \in C</math>, then B = <math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\ia_{21}+a_{31}&ia_{22}+a_{32}\end{pmatrix}</math>.<br />
<br />
Therefore, T(A) = B<Sup>t</Sup> is T<math>\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\\a_{31}&a_{32}\end{pmatrix}</math>=<math>\begin{pmatrix}a_{11}&a_{21}&ia_{21}+a_{31}\\a_{12}&a_{22}&ia_{22}+a_{32}\end{pmatrix}</math><br />
<br />
===Exam April/May 2006 #7===<br />
Q: Let T : V -> V and U : V -> V be linear operators on a finite-dimensional vector space V. Assume that U is invertible and T is diagonalizable. Prove that the linear operator UTU<Sup>-1</Sup> = U o T o U<Sup>-1</Sup>.<br />
<br />
I don't know where or how to start this question ><.<br />
<br />
===Sec. 2.4 Lemma p. 101===<br />
Q: In the proof of the lemma, the second line, we have "T(beta) spans R(T) = W". How do we know that R(T) = W? This would be true if dim V = dim W, because then T would be onto, but we can't assume what we're trying to prove.<br />
<br />
===Exam April/May 2006 #4===<br />
Q: Suppose that A, B Є M<sub>mxn</sub>(F), and rank(A) = rank (B). Prove that there exist invertible matrices P Є M<sub>mxm</sub>(F) and Q Є M<sub>nxn</sub>(F) such that B = PAQ.<br />
<br />
A(partial): Here is a sketch. If you rref A and B by applying a series of elementary row operation matrices, they will both look similar. That is, they will have a section of 1's and 0's (each 1 is the only number in its column) and then a section of "remaining stuff", and these sections will be the same "size" because their ranks are the same. Then, using elementary column operations, you can essentially modify the "remaining stuff" as much as you like, by adding multiples of the "nice" columns (with single 1's). These row and column operations can then be grouped nicely and set equal to P and Q, which are invertible because products of elementary matrices are invertible. <br />
<br />
I know this is very rough, but even if I did have a full answer I wouldn't know how to typeset it.<br />
<br />
===Complex Numbers===<br />
Q: If 'C' is used in the context of a vector space (as in "define T:C->C"), then should we consider C to be the vector space of C over the field C, or instead C over the field R?<br />
<br />
==Solved Questions==<br />
<br />
===Question Template===<br />
Q: How many ways are there to get to the nth stair, if at each step you can move either one or two squares up?<br />
<br />
A: This question can be easily modeled by the Fibonacci numbers, with the nth number being the ways to get to the nth stair. This is because, to get to the nth stair, you can come only from the n-1th or the n-2th. This is exactly how the Fibonacci numbers are defined; the proof is simple by induction.<br />
<br />
===Sec. 1.3 Thm 1.3 Proof===<br />
Q: In the first paragraph of the proof, it says "But also x + 0 = x , and thus 0'=0." How do we know 0 (that is 0 of V) even exists in W? I understand that we know ''some'' zero exists (0'), but not why ''the'' zero (0) exists.<br />
<br />
A: x is in W as well in V. Thus, x + 0 = x (VS 3).<br />
<br />
Reply: Oh I see... now it looks so obvious =/. Thanks.</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Final_Exam_Preparation_Forum06-240/Final Exam Preparation Forum2006-12-11T17:03:48Z<p>Wongpak: /* Sec. 1.3 Thm 1.3 Proof */</p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
If you have questions, ask them here and hopefully someone else will know the answer. (Answering questions will probably help you understand it more).<br />
<br />
Since many of us (including me) don't really know how to use wikis, I suggest that we keep the formatting simple: I will post a template at the top of this page, and if you want to add something just click on "edit", copy the template, and insert your question. Order the questions according to section (solved/unsolved, as judged by whoever created the question), with the newest at the top, except for the template question. In general, I wouldn't retype the question if it's from the book because that's tedious and we all have the book.<br />
<br />
(By the way, I think you leave a space between lines in the code to make a new line; that is, simply pressing enter once will not make a new line. Also, you can press a button at the top of the editing textbox that lets you put in simple equations.)<br />
<br />
==Unsolved Questions==<br />
<br />
===Question Template===<br />
Q: Can someone help me prove: "If an [[integer]] <math>n</math> is greater than 2, then <math>a^n + b^n = c^n</math> has no solutions in non-zero integers <math>a</math>, <math>b</math>, and <math>c</math>."? I had the answer in my head at one point, but the margins of the piece of paper I was working with was too small to fit it.<br />
<br />
===Exam April/May 2006 #4===<br />
Q: Suppose that A, B ∈ M<sub>mxn</sub>(F), and rank(A) = rank(B). Prove that there exist invertible matrices P ∈ M<sub>mxm</sub>(F) and Q ∈ M<sub>nxn</sub>(F) such that B = PAQ.<br />
<br />
A (partial): Here is a sketch. If you row-reduce A and B to rref by applying a series of elementary row operation matrices, they will both look similar. That is, each will have a section of 1's and 0's (each 1 the only nonzero entry in its column) and then a section of "remaining stuff", and these sections will be the same "size" because the ranks are equal. Then, using elementary column operations, you can modify the "remaining stuff" as much as you like by adding multiples of the "nice" columns (those with a single 1). These row and column operations can then be grouped and set equal to P and Q, which are invertible because products of elementary matrices are invertible. <br />
<br />
I know this is very rough, but even if I did have a full answer I wouldn't know how to typeset it.<br />
<br />
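The sketch above can be made concrete with sympy (an illustrative sketch only; the function name and construction are ours, not from the course): row-reduce to rref while tracking the row operations via the augmented matrix [A | I], then permute and clear columns to reach the canonical form with an identity block, so that two matrices of equal rank satisfy B = PAQ.

```python
from sympy import Matrix, eye

def rank_normal_form(A):
    """Return invertible P, Q with P*A*Q in the block form [[I_r, 0], [0, 0]].

    Row operations are tracked by row-reducing the augmented matrix [A | I];
    column operations then move the pivot columns to the front and clear
    the "remaining stuff", as in the sketch above.
    """
    m, n = A.shape
    aug, piv = Matrix.hstack(A, eye(m)).rref()
    P, R = aug[:, n:], aug[:, :n]          # P is invertible and P*A = R = rref(A)
    pivots = [p for p in piv if p < n]     # pivots that landed in the A-block
    r = len(pivots)
    order = pivots + [j for j in range(n) if j not in pivots]
    Q1 = eye(n).extract(list(range(n)), order)   # permutation: pivot columns first
    C = (R * Q1)[:r, r:]
    Q2 = eye(n)
    Q2[:r, r:] = -C                        # clear the non-pivot columns
    return P, Q1 * Q2

A = Matrix([[1, 2, 3, 4], [2, 4, 6, 8], [0, 1, 1, 1]])   # rank 2
B = Matrix([[0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]])   # rank 2
Pa, Qa = rank_normal_form(A)
Pb, Qb = rank_normal_form(B)
P, Q = Pb.inv() * Pa, Qa * Qb.inv()        # then B = P*A*Q
```

Since both matrices reach the same canonical form, composing one reduction with the inverse of the other gives the required P and Q.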
===Complex Numbers===<br />
Q: If 'C' is used in the context of a vector space (as in "define T:C->C"), should we consider C to be the vector space C over the field C, or C over the field R?<br />
<br />
===Sec. 1.6, Ex. 29 a.===<br />
Q: Does anyone know an efficient way of doing this?<br />
<br />
===Sec. 1.3 Thm 1.3 Proof===<br />
Q: In the first paragraph of the proof, it says "But also x + 0 = x , and thus 0'=0." How do we know 0 (that is 0 of V) even exists in W? I understand that we know ''some'' zero exists (0'), but not why ''the'' zero (0) exists.<br />
<br />
A: x is in W as well as in V. Since x lies in V, x + 0 = x holds by (VS 3), and then 0' = 0 by cancellation.<br />
<br />
==Solved Questions==<br />
<br />
===Question Template===<br />
Q: How many ways are there to get to the nth stair, if at each step you can climb either one or two stairs?<br />
<br />
A: This question can be easily modeled by the Fibonacci numbers, with the nth number giving the number of ways to get to the nth stair. This is because, to get to the nth stair, you can come only from the (n-1)th or the (n-2)th. This is exactly how the Fibonacci numbers are defined; the proof is a simple induction.</div>Wongpakhttp://drorbn.net/index.php?title=Talk:06-240/Classnotes_For_Tuesday_November_14Talk:06-240/Classnotes For Tuesday November 142006-11-16T04:55:26Z<p>Wongpak: </p>
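The recurrence in the answer above can be sketched iteratively (an illustration, not part of the original answer; the function name is ours):

```python
def ways(n):
    """Number of ways to reach stair n taking steps of size 1 or 2."""
    w = [0] * (n + 1)
    w[0] = 1              # one way to be at the bottom: take no steps
    if n >= 1:
        w[1] = 1
    for k in range(2, n + 1):
        # the last step to stair k came from stair k-1 or stair k-2
        w[k] = w[k - 1] + w[k - 2]
    return w[n]
```

The values 1, 2, 3, 5, 8, ... are exactly the Fibonacci numbers shifted by one index.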
<hr />
<div>Reduced row echelon form - Is there a reason to reduce each column containing a leading 1 to the form e<sub>n</sub> (1 at the n<sup>th</sup> row, 0 for all other entries)? According to some books, the matrix <math>\begin{pmatrix}1&3&2&4&2\\0&1&2&3&4\\0&0&0&1&2\\0&0&0&0&0 \end{pmatrix}</math> is good enough to show that the rank of the matrix is 3. This is because the first three rows are linearly independent: no one of them is a linear combination of the others. Could anyone please explain why we have to reduce it to <math>\begin{pmatrix}1&0&-4&0&0\\0&1&2&0&-2\\0&0&0&1&2\\0&0&0&0&0 \end{pmatrix}</math>? Thank you. [[User:Wongpak|Wongpak]] 23:54, 15 November 2006 (EST)</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_For_Tuesday_October_3106-240/Classnotes For Tuesday October 312006-11-03T19:03:32Z<p>Wongpak: </p>
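The rank in the question above can be read off either form; a quick sympy check (an illustration, not part of the original discussion) confirms the rank and the fully reduced matrix:

```python
from sympy import Matrix

# The echelon-form matrix from the question.
A = Matrix([
    [1, 3, 2, 4, 2],
    [0, 1, 2, 3, 4],
    [0, 0, 0, 1, 2],
    [0, 0, 0, 0, 0],
])
R, pivots = A.rref()   # fully reduced row echelon form and pivot columns
```

Note that the last entry of the first rref row works out to 0, since -10 + 5*2 = 0 when the pivot in column 4 is cleared.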
<hr />
<div>{{06-240/Navigation}}<br />
<br />
[[Media:06-240-31.Nov.06.pdf|Oct31-Lecture notes]]</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_for_Thursday,_September_1406-240/Classnotes for Thursday, September 142006-10-25T02:35:15Z<p>Wongpak: </p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
==Scan of Lecture Notes==<br />
* PDF notes by [[User:Harbansb]]: [[Media:06-240-0914.pdf|September 14 Notes]].<br />
* If I have made an error in my notes, or you would like the editable OpenOffice file, feel free to e-mail me at harbansb@msn.com.<br />
* PDF notes by [[User:Alla]]: [[Media:MAT_Lect002.pdf|Week 1 Lecture 2 notes]]<br />
* PDF notes by [[User:Gokmen]]: [[Media:06-240-14-September.pdf|Week 1 Lecture 2 notes]]<br />
<br />
==Scan of Tutorial Notes==<br />
* PDF notes by [[User:Alla]]: [[Media:MAT_Tut001.pdf|Week 1 Tutorial notes]]<br />
* PDF notes by [[User:Gokmen]]: [[Media:06-240-14-sept-tutorial.pdf|Week 1 Tutorial notes]]</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_For_Thursday,_September_2106-240/Classnotes For Thursday, September 212006-10-25T02:34:18Z<p>Wongpak: /* Scan of Lecture Notes */</p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
==Scan of Lecture Notes==<br />
<br />
* PDF file by [[User:Alla]]: [[Media:MAT_Lect004.pdf|Week 2 Lecture 2 notes]]<br />
* PDF file by [[User:Gokmen]]: [[Media:06-240-Lecture-21-september.pdf|Week 2 Lecture 2 notes]]<br />
<br />
==Scan of Tutorial notes==<br />
<br />
* PDF file by [[User:Alla]]: [[Media:MAT_Tut002.pdf|Week 2 Tutorial notes]]<br />
* PDF file by [[User:Gokmen]]: [[Media:06-240-tutorial-21-september.pdf|Week 2 Tutorial notes]]<br />
<br />
==Force Vectors==<br />
A force has a direction and a magnitude.<br />
#<math>\mbox{There is a special force vector called 0.}</math><br />
#<math>\mbox{They can be added.}</math><br />
#<math>\mbox{They can be multiplied by any scalar.}</math><br />
<br />
====''Properties''====<br />
<br />
<math>\mbox{(convention: }x,y,z\mbox{ }\mbox{ are vectors; }a,b,c\mbox{ }\mbox{ are scalars)}</math><br />
#<math> x+y=y+x \ </math><br />
#<math> x+(y+z)=(x+y)+z \ </math><br />
#<math> x+0=x \ </math><br />
#<math> \forall x\; \exists\ y \ \mbox{ s.t. }x+y=0</math><br />
#<math> 1\cdot x=x \ </math><br />
#<math> a(bx)=(ab)x \ </math><br />
#<math> a(x+y)=ax+ay \ </math><br />
#<math> (a+b)x=ax+bx \ </math><br />
<br />
=====Definition===== <br />
<br />
Let F be a field "of scalars". A vector space over F is a set V, of "vectors", along with two operations<br />
<br />
: <math> +: V \times V \to V </math><br />
: <math> \cdot: F \times V \to V \mbox{, so that:}</math><br />
#<math> \forall x,y \in V\ x+y=y+x </math><br />
#<math> \forall x,y,z \in V\ x+(y+z)=(x+y)+z </math><br />
#<math> \exists\ 0 \in V \mbox{ s.t. } \forall x \in V\ x+0=x </math><br />
#<math> \forall x \in V\; \exists\ y \in V \mbox{ s.t. } x+y=0</math><br />
#<math> 1\cdot x=x\ </math><br />
#<math> a(bx)=(ab)x\ </math><br />
#<math> a(x+y)=ax+ay\ </math><br />
#<math> \forall x \in V\ ,\forall a,b \in F\ (a+b)x=ax+bx </math><br />
-----<br />
9. <math> x \mapsto \vert x\vert \in \mathbb{R} \ \vert x+y\vert \le \vert x\vert+\vert y\vert </math><br />
====''Examples''====<br />
'''Ex.1.'''<br />
<math> F^n= \lbrace(a_1,a_2,a_3,\ldots,a_{n-1},a_n):\forall i\ a_i \in F \rbrace </math> <br/><br />
<math> n \in \mathbb{Z}\ , n \ge 0 </math> <br/><br />
<math> x=(a_1,\ldots,a_n)\ y=(b_1,\ldots, b_n)\ </math> <br/><br />
<math> x+y:=(a_1+b_1,a_2+b_2,\ldots,a_n+b_n)\ </math> <br/><br />
<math> 0_{F^n}=(0,\ldots,0) </math> <br/><br />
<math> a\in F\ ax=(aa_1,aa_2,\ldots,aa_n) </math> <br/><br />
<math> \mbox{In } \mathbb{Q}^3 \ \left( \frac{3}{2},-2,7\right)+\left( \frac{-3}{2}, \frac{1}{3},240\right)=\left(0, \frac{-5}{3},247\right) </math> <br/><br />
<math> 7\left( \frac{1}{5},\frac{1}{7},\frac{1}{9}\right)=\left( \frac{7}{5},1,\frac{7}{9}\right) </math> <br/><br />
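The two <math>\mathbb{Q}^3</math> computations above can be reproduced with exact rational arithmetic; a small sketch (the helper names vadd and smul are ours, not the course's):

```python
from fractions import Fraction as Fr

def vadd(x, y):
    """Componentwise addition in F^n."""
    return tuple(a + b for a, b in zip(x, y))

def smul(c, x):
    """Multiplication of every component by the scalar c."""
    return tuple(c * a for a in x)

x = (Fr(3, 2), Fr(-2), Fr(7))
y = (Fr(-3, 2), Fr(1, 3), Fr(240))
```

Using `Fraction` rather than floats keeps the arithmetic exact, matching the field <math>\mathbb{Q}</math>.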
'''Ex.2.'''<br />
<math> V=M_{m\times n}(F)=\left\lbrace\begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & <br />
& \vdots \\ a_{m1} & \cdots & a_{mn}\end{pmatrix}: a_{ij} \in F \right\rbrace </math> <br/><br />
<math> M_{3\times 2}( \mathbb{R})\ni \begin{pmatrix} 7 & -7 \\ \pi & \mathit{e} \\ -5 & 2 \end{pmatrix} </math> <br/><br />
<math>\mbox{Addition by adding entry by entry:}</math><br />
<br />
<math> M_{2\times 2}\ \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}+\begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}=\begin{pmatrix} {a_{11}+b_{11}} & {a_{12}+b_{12}} \\ {a_{21}+b_{21}} & {a_{22}+b_{22}} \end{pmatrix}</math> <br/><br />
<br />
<math>\mbox{Multiplication by multiplying scalar c to all entries by M.}</math><br />
<br />
<math> c\cdot M_{2\times 2}\ \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}=\begin{pmatrix} c\cdot a_{11} & c\cdot a_{12} \\ c\cdot a_{21} & c\cdot a_{22} \end{pmatrix}</math> <br/> <br/><br />
<br />
<math>\mbox{Zero matrix has all entries = 0:}</math><br />
<br />
<math> 0_{M_{m\times n}}=\begin{pmatrix} 0 & \cdots & 0 \\ \vdots & <br />
& \vdots \\ 0 & \cdots & 0\end{pmatrix} </math> <br/><br />
'''Ex.3.'''<br />
<math> \mathbb{C}</math> forms a vector space over <math> \mathbb{R}</math>. <br/><br />
'''Ex.4.'''<br />
<math>\mbox{F is a vector space over itself.}</math> <br/><br />
'''Ex.5.'''<br />
<math> \mathbb{R}</math> is a vector space over <math> \mathbb{Q}</math>. <br/><br />
'''Ex.6.'''<br />
<math>\mbox{Let S be a set. Let}</math> <br/><br />
<math> \mathcal{F}(S,\mathbb{R})=\big\{f:S\to \mathbb{R} \big\} </math> <br/><br />
<math> f,g \in \mathcal{F}(S,\mathbb{R}) </math> <br/><br />
<math> (f+g)(t)=f(t)+g(t)\ \mbox{ for any } t\in S </math> <br/><br />
<math> (af)(t)=a\cdot f(t)\ </math></div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_For_Tuesday,_September_1906-240/Classnotes For Tuesday, September 192006-10-25T02:31:36Z<p>Wongpak: </p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
* Notes by [[User:Aierinc]]: [http://katlas.math.toronto.edu/drorbn/index.php?title=Image:Lecture3_1.jpg September 19 Note (1/5)]<br />
* Notes by [[User:Aierinc]]: [http://katlas.math.toronto.edu/drorbn/index.php?title=Image:Lecture3_2.jpg September 19 Note (2/5)]<br />
* Notes by [[User:Aierinc]]: [http://katlas.math.toronto.edu/drorbn/index.php?title=Image:Lecture3_3.jpg September 19 Note (3/5)]<br />
* Notes by [[User:Aierinc]]: [http://katlas.math.toronto.edu/drorbn/index.php?title=Image:Lecture3_4.jpg September 19 Note (4/5)]<br />
* Notes by [[User:Aierinc]]: [http://katlas.math.toronto.edu/drorbn/index.php?title=Image:Lecture3_5.jpg September 19 Note (5/5)]<br />
* PDF file by [[User: Alla]]: [[Media:MAT_Lect003.pdf|Week 2 Lecture 1 notes]]<br />
* PDF file by [[User: Gokmen]]: [[Media:06-240-Lecture-19-september.pdf|Week 2 Lecture 1 notes]]</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_For_Tuesday_September_2606-240/Classnotes For Tuesday September 262006-10-25T02:26:28Z<p>Wongpak: /* Links to Classnotes */</p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
===Links to Classnotes===<br />
* Classnote for Tuesday Sept 26 [http://www.megaupload.com/?d=4L41DERL]<br />
* PDF file by [[User:Alla]]: [[Media:MAT_Lect005.pdf|Week 3 Lecture 1 notes]]<br />
* PDF file by [[User:Gokmen]]: [[Media:06-240-Lecture-26-september.pdf|Week 3 Lecture 1 notes]]<br />
<br />
----<br />
<br />
===Vector Spaces===<br />
<br />
'''Example 5.''' <br><br />
<br />
<math>\mbox{Polynomials:}{}_{}^{}</math> <br><br />
<br />
<math>7x^3+9x^2-2x+\pi\ </math> <br><br />
<br />
<math>\mbox{Let } \mathcal{F }\ \mbox{be a field.}</math> <br><br />
<br />
<math>P(\mathcal{F})=\bigg\{ \sum_{i=0}^n a_i x^i :n \in \mathbb{Z}\,\ n\ge 0\ \forall i\ \ a_i \in \mathcal{F} \bigg\} {}_{}^{} </math><br />
<br />
<math> \mbox{Addition of polynomials is defined in the expected way:}{}_{}^{} </math> <br><br />
<br />
<math> \sum_{i=0}^n a_i x^i + \sum_{i=0}^m b_i x^i =\sum_{i=0}^{\max(m,n)}{(a_i+b_i)} x^i </math> <br><br />
<br />
<br />
'''Theorem 1.'''(Cancellation law for vector spaces)<br><br />
<br />
<math> \mbox{If in a vector space x+z=y+z then x=y.}{}_{}^{} </math> <br><br />
<br />
'''Proof:'''<br><br />
<br />
<math> \mbox{Add w to both sides of a given equation where w is an element}{}_{}^{} </math> <br />
<math> \mbox{for which z+w=0 (exists by VS4)}{}_{}^{} </math> <br><br />
<br />
<math>(x+z)+w=(y+z)+w \ </math> <br><br />
<br />
<math> x+(z+w)=y+(z+w)\ \mbox{(by VS2)} {}_{}^{} </math> <br><br />
<br />
<math> x+0=y+0\ \mbox{(by the choice of w)} {}_{}^{} </math> <br><br />
<br />
<math> x=y\ \mbox{(by VS3)} {}_{}^{} </math> <br><br />
<br />
<br />
'''Theorem 2.''' "0 is unique" <br><br />
<br />
<math> \mbox{If some z}\in\mbox{V satisfies x+z=x for some x}\in \mbox{V then z=0.} {}_{}^{} </math> <br><br />
<br />
'''Proof:'''<br><br />
<br />
<math>x+z=x+0\ \mbox{(since x+z=x and x=x+0 by VS3)} {}_{}^{} </math> <br><br />
<br />
<math>z+x=0+x\ \mbox{(by VS1)} {}_{}^{} </math> <br><br />
<math>z=0\ \mbox{(by Theorem 1)} {}_{}^{} </math> <br><br />
<br />
<br />
'''Theorem 3.''' "negatives are unique"<br><br />
<br />
<math> \mbox{If x+y=0 and x+z=0 then y=z.} {}_{}^{} </math> <br><br />
<br />
<br />
'''Theorem 4.'''<br><br />
<br />
a)<math>0_F.x=0_V\ </math> <br><br />
<br />
b)<math>a.0_V=0_V\ </math> <br><br />
<br />
c)<math>(-a)x=a(-x)=-(ax)\ </math> <br><br />
<br />
<br />
'''Theorem 5.'''<br><br />
<br />
<math> \mbox{If } x_i\ \mbox{ i=1,...,n are in V then } \sum {x_i}=x_1+x_2+...+x_n\ \mbox{ makes sense whichever way you parse it.} {}_{}^{} </math> <br><br />
<br />
<math> \mbox{(From VS1 and VS2)} {}_{}^{} </math> <br><br />
----<br />
===Subspaces===<br />
<br />
'''Definition'''<br><br />
<br />
<math> \mbox{Let V be a vector space. A subspace of V is a subset W of V which is a vector space in itself under the operations it inherits from V.}{}_{}^{} </math> <br><br />
<br />
'''Theorem'''<br><br />
<br />
<math>W\subset V\ \mbox{is a subspace of V iff}{}_{}^{} </math> <br><br />
<br />
#<math>\forall x,y\in W\ \ x+y\in W \ </math><br />
#<math> \forall a\in F,\ \forall x\in W\ \ ax\in W\ </math><br />
#<math>0 \in W\ </math> <br><br />
<br />
'''Proof'''<br><br />
<math>\Rightarrow </math> <br><br />
<br />
<math>\mbox{Assume W is a subspace. If x,y} \in \mbox{W then x+y} \in \mbox{W because W is a vector space in itself. Likewise ax} \in \mbox{W, and } 0 \in W. {}_{}^{} </math> <br><br />
<br />
<math>\Leftarrow </math> <br><br />
<br />
<math>\mbox{Assume W}\subset \mbox{V for which } x,y\in W\Rightarrow x+y\in W\ ; x\in W, a\in F \Rightarrow ax\in W.\ {}_{}^{} </math> <br><br />
<br />
<math>\mbox{We need to show that W is a vector space. Addition and multiplication are clearly defined on W so we just need to check VS1-VS8.}{}_{}^{} </math> <br><br />
<br />
<math> \mbox{Indeed, VS1, VS2, VS5, VS6, VS7, and VS8 hold in V hence in W.}{}_{}^{} </math> <br><br />
<br />
<math> \mbox{VS3-pick any x}\in W\ \ 0=0.x\in W\ by\ 2.\ {}_{}^{} </math> <br><br />
<br />
<math> \mbox{VS4-given x in W, take y=(-1).x}\in W\ and\ x+y=0.\ {}_{}^{} </math> <br><br />
<br />
<br />
<u>Examples</u><br><br />
<br />
'''Example 1.'''<br><br />
<br />
'''Definition'''<br><br />
<br />
<math> \mbox{If A}\in M_{m\times n}(F) \mbox{ the transpose of A, } A^t \mbox{ is the matrix } (A^t)_{ij}:=A_{ji}. {}_{}^{} </math> <br><br />
<br />
<math> \begin{pmatrix} 2 & 3 & \pi\ \\ 7 & 8 & -2 \end{pmatrix}^t = \begin{pmatrix} 2 & 7 \\ 3 & 8 \\ \pi\ & -2 \end{pmatrix} </math> <br><br />
<br />
<math> \mbox{Then:} {}_{}^{} </math> <br><br />
<br />
#<math>A^t \in M_{n\times m}(F)\ </math> <br><br />
#<math>(A^t)^t=A\ </math> <br><br />
#<math>(A+B)^t=A^t+B^t\ </math> <br><br />
#<math>(cA)^t=c(A^t)\ \forall c\in F\ </math> <br><br />
<br />
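The four transpose properties above, and the closure of the symmetric matrices under addition proved below, can be spot-checked with sympy (a sanity check on examples, not a proof; the matrices B, S1, S2 are our own choices):

```python
from sympy import Matrix, Rational, pi

# The example matrix from the notes and a second matrix of the same shape.
A = Matrix([[2, 3, pi], [7, 8, -2]])
B = Matrix([[1, 0, 1], [4, -5, 6]])
c = Rational(3, 2)

# Two symmetric matrices, to check closure of W = {A : A^t = A} under +.
S1 = Matrix([[1, 2], [2, 3]])
S2 = Matrix([[0, 5], [5, -1]])
```

In sympy, `A.T` computes the transpose, so each identity can be checked by structural equality of matrices.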
'''Definition'''<br><br />
<br />
<math>A\in M_{n\times n}(F) \mbox{ is called symmetric if } A^t=A. \ {}_{}^{} </math> <br><br />
<br />
<u>Claim</u><br><br />
<br />
<math>V=M_{n\times n}(F) \ \mbox{ is a vector space. Let } \ W=\big\{ \mbox{symmetric A-s in V}\big\} = \big\{ A\in V: A^t=A \big\}\ \mbox{ then W is a subspace of V.} {}_{}^{} </math> <br><br />
<br />
<u>Proof</u><br><br />
<br />
<br />
1.<math> \mbox{Need to show that if } A\in W \mbox{ and } B\in W \mbox{ then } A+B\in W. \ {}_{}^{} </math> <br><br />
<br />
<math>A^t=A,\ B^t=B \ </math> <br><br />
<br />
<math>(A+B)^t=A^t+B^t=A+B\ \mbox{ so } A+B\in W. </math> <br><br />
<br />
2.<math>\mbox{If } A\in W,\ c\in F \mbox{ need to show } cA\in W {}_{}^{} </math> <br><br />
<br />
<math>(cA)^t=cA^t=cA\ \Rightarrow cA\in W </math> <br><br />
<br />
3.<math>0_M=\begin{pmatrix} 0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 0\end{pmatrix} \Rightarrow 0^t=0 \ so \ 0\in W</math> <br><br />
<br />
'''Example 2.'''<br><br />
<br />
<math>V=M_{n\times n}(F) </math> <br><br />
<br />
<math>A=A_{ij}\ \ trA=\sum_{i=1}^n A_{ii}\ \mbox{(the trace of A)} </math> <br><br />
<br />
<math> \mbox{Properties of tr:}{}_{}^{} </math> <br><br />
<br />
#<math>tr0_M=0 \ </math> <br><br />
#<math>tr(A+B)=tr(A)+tr(B) \ </math> <br><br />
#<math> tr(cA)=c.trA \ </math> <br><br />
<br />
<math>A=\begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix}\ \ B=\begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix} \ </math> <br><br />
<math>trA=1\ \ trB=1 \ </math> <br><br />
<br />
<math>Set\ \ W=\big\{A\in V: trA=0\big\}=\bigg\{\begin{pmatrix} 1 & 7 \\ \pi\ & -1\end{pmatrix},...\bigg\} \ </math> <br><br />
<br />
<u>Claim</u><br />
<br />
<math> \mbox{W is a subspace.}{}_{}^{} </math> <br><br />
<br />
<math> \mbox{Indeed,}{}_{}^{} </math> <br><br />
#<math>A,B\in W \Rightarrow trA=0=trB\ \ tr(A+B)=tr(A)+tr(B)=0+0=0\ so\ A+B\in W\ </math> <br />
#<math>A\in W\ \ trA=0\ \ tr(cA)=c(trA)=c.0=0\ so\ cA\in W\ </math><br />
#<math>tr0_M=0\ \ 0_M\in W \ </math><br />
<br />
'''Example 3.'''<br><br />
<br />
<math> W_3=\big\{ A\in M_{n\times n}(F): trA=1\big\} \mbox{ Not a subspace.} {}_{}^{} </math> <br><br />
<math> A,B\in W_3 \Rightarrow tr(A+B)=trA+trB=1+1=2\ so\ A+B\ \not\in W_3\ </math> <br><br />
<br />
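Examples 2 and 3 can be checked numerically with sympy (an illustrative check; the particular matrices are our own choices): the trace is additive, so trace-zero matrices are closed under addition, while trace-one matrices are not.

```python
from sympy import Matrix, pi, zeros

A = Matrix([[1, 7], [pi, -1]])    # tr A = 1 + (-1) = 0, so A is in W
B = Matrix([[2, 5], [0, -2]])     # tr B = 0, so B is in W
C1 = Matrix([[1, 0], [0, 0]])     # tr C1 = 1, in W_3
C2 = Matrix([[0, 0], [0, 1]])     # tr C2 = 1, in W_3
```

Note that the off-diagonal entries (even <math>\pi</math>) play no role: only the diagonal enters the trace.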
'''Theorem'''<br><br />
<math> \mbox{The intersection of two subspaces of the same space is always a subspace.}{}_{}^{}</math><br><br />
<math> \mbox{Assume }W_1\subset V \mbox{ is a subspace of V, } W_2\subset V \mbox{ is a subspace of V, then }W_1\cap W_2=\big\{ x: x\in W_1 \ and\ x\in W_2\big\} \mbox{ is a subspace.}{}_{}^{} </math> <br><br />
<math>\mbox{However, }W_1\cup W_2=\big\{x: x\in W_1\ or\ W_2\big\} \mbox{ is most often not a subspace.} {}_{}^{}</math> <br><br />
<br />
'''Proof'''<br><br />
<br />
1.<math> \mbox{Assume }x,y \in W_1\cap W_2 \mbox{ , that is, } x\in W_1, x\in W_2, y\in W_1, y\in W_2. \ {}_{}^{} </math><br><br />
<math> x+y\in W_1 \ as\ x,y\in W_1 \mbox{ and } W_1 \mbox{ is a subspace}{}_{}^{} </math><br><br />
<math> x+y\in W_2 \ as\ x,y\in W_2 \mbox{ and } W_2 \mbox{ is a subspace}{}_{}^{} </math><br><br />
<math> \mbox{So }x+y \in W_1\cap W_2. \ {}_{}^{} </math><br><br />
<br />
2.<math>\mbox{If} \ x\in W_1\cap W_2\ then\ x\in W_1 \Rightarrow cx\in W_1\ ,\ x\in W_2 \Rightarrow cx\in W_2\ \Rightarrow cx\in W_1\cap W_2. \ {}_{}^{} </math><br><br />
<br />
3.<math>0 \in W_1\ ,\ 0\in W_2 \Rightarrow 0\in W_1\cap W_2. \ </math></div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_For_Thursday,_September_2806-240/Classnotes For Thursday, September 282006-10-25T02:25:27Z<p>Wongpak: </p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
===Scan of Lecture notes===<br />
<br />
*Image file: week 3 lecture<br />
** note 1: [http://i98.photobucket.com/albums/l269/uhoang/1.jpg]<br />
** note 2: [http://i98.photobucket.com/albums/l269/uhoang/2.jpg]<br />
<br />
* PDF file by [[User:Alla]]: [[Media:MAT_Lect006.pdf|Week 3 Lecture 2 notes]]<br />
* PDF file by [[User:Gokmen]]: [[Media:06-240-Lecture-28-september.pdf|Week 3 Lecture 2 notes]]<br />
<br />
===Scan of Tutorial notes===<br />
<br />
* PDF file by [[User:Alla]]: [[Media:MAT_Tut003.pdf|Week 3 Tutorial notes]]<br />
* PDF file by [[User:Gokmen]]: [[Media:06-240-tutorial-28-september.pdf|Week 3 Tutorial notes]]<br />
<br />
===Linear Combination===<br />
<br />
<math>\mbox{Definition: Let }(u_i) = (u_1,u_2,\ldots,u_n)\mbox{ be a sequence of vectors in }V</math>. <br />
<br />
<math>\mbox{A sum of the form:}{}_{}^{}</math><br />
<br />
<math> a_i\in F,\sum_{i=1}^n a_i u_i = a_1u_1 + a_2u_2+\ldots+a_nu_n</math><br />
<br />
<math>\mbox{is called a Linear Combination of the }u_i^{ }</math>.<br />
<br />
===Span===<br />
<math>\mbox{span}(u_i^{ }):= \lbrace\mbox{ The set of all possible linear combinations of the } u_i^{ }\rbrace</math><br />
<br />
<math>\mbox{If }\mathcal{S} \subset V\ \mbox{ is any subset, }</math><br />
<br />
<math>\mbox{span}(\mathcal{S}):= \lbrace\mbox{The set of all linear combinations of vectors in }\mathcal{S}\rbrace=\left\lbrace\sum_{i=1}^n a_i u_i,\quad a_i \in F, u_i \in \mathcal{S}\right\rbrace</math><br />
<br />
<math>\mbox{span}(\mathcal{S})\mbox{ always contains }0\mbox{ even if }\mathcal{S}=\emptyset</math> <br />
<br />
'''Theorem'''<br />
<br />
<math>\forall\mathcal{S} \subset V\mbox{, span}(\mathcal{S})\mbox{ is a subspace of }V</math><br />
<br />
<math>\mbox{Proof:}{}_{}^{}</math><br />
<br />
1. <math>0 \in\mbox{ span}(\mathcal{S})</math>.<br><br />
2. <math>\mbox{Let }x \in \mbox{ span}(\mathcal{S})\Rightarrow x =\sum_{i=1}^n a_iu_i\mbox{, }u_i\in \mathcal{S}\mbox{, }</math><br />
<br />
<math>\mbox{and let }y \in \mbox{ span}(\mathcal{S})\Rightarrow y =\sum_{j=1}^m b_jv_j\mbox{, }v_j\in \mathcal{S}</math><br />
<br />
<math>x+y = \sum_{i=1}^n a_iu_i+ \sum_{j=1}^m b_jv_j</math><br />
<br />
<math>\qquad\mbox{which is again a linear combination of vectors in }\mathcal{S}\mbox{ (namely of }u_1,\ldots,u_n,v_1,\ldots,v_m\mbox{), so }x+y\in\mbox{ span}(\mathcal{S})</math><br />
<br />
3.<math>cx= c\sum_{i=1}^n a_iu_i=\sum_{i=1}^n(ca_i)u_i\in\mbox{ span}(\mathcal{S})</math><br />
<br />
<br />
''Example''<br />
1. <br />
<br />
<math>\mbox{Let } P_3(\mathbb{R})=\lbrace ax^3+bx^2+cx+d\rbrace\subset P(\mathbb{R})\mbox{, where }a, b, c, d \in \mathbb{R}</math>.<br />
<br />
<math>\begin{matrix}u_1^{}&=&x^3-2x^2-5x-3\\<br />
u_2^{}&=&3x^3-5x^2-4x-9\\<br />
v_{}^{}&=&2x^3-2x^2+12x-6\end{matrix}</math><br />
<br />
<math>\mbox{Let }W=\mbox{span}(u_1^{},u_2^{})\mbox{,}</math><br><br />
<br />
<br><math>\mbox{Does/Is } v \in W\mbox{ ?}</math><br />
<br />
<math>v\in W\mbox{ if it is a linear combination of span}(u_1^{},u_2^{})</math><br />
<br />
<math>v=a_1u_1 + a_2u_2 \mbox{ for some }a_1, a_2 \in \mathbb{R}</math><br><br />
<br />
<br><math>\mbox{If }\exists a_1,a_2\in \mathbb{R}</math><br />
<br />
<math>\begin{matrix}2x^3-2x^2+12x-6&=& a_1^{}(x^3-2x^2-5x-3) + a_2^{}(3x^3-5x^2-4x-9)\\<br />
\ &=&(a_1^{}+3a_2^{})x^3 + (-2a_1^{}-5a_2^{})x^2 + (-5a_1^{}-4a_2^{})x + (-3a_1^{}-9a_2^{})\end{matrix}</math><br />
<br />
<math>\mbox{Need to solve}\begin{cases}<br />
2=a_1^{}+3a_2^{}\\<br />
-2=-2a_1^{}-5a_2^{}\\<br />
12=-5a_1^{}-4a_2^{}\\<br />
-6=-3a_1^{}-9a_2^{}\end{cases}</math><br />
<br />
<math>\mbox{Solving the first two equations gives }a_1^{}=-4\mbox{ and }a_2^{}=2.</math><br />
<br />
<math>\mbox{Check if }a_1^{}=-4\mbox{ and }a_2^{}=2\mbox{ holds for all 4 equations.}</math><br />
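As an aside, the check can be carried out mechanically by encoding each polynomial as its coefficient vector; a minimal sketch (the vector encoding of <math>P_3(\mathbb{R})</math> is a convention of this check, not of the notes):

```python
# Aside: each polynomial in P_3(R) is encoded by its coefficient vector
# (x^3, x^2, x, constant) -- a convention of this check, not of the notes.
u1 = [1, -2, -5, -3]   # u1 = x^3 - 2x^2 - 5x - 3
u2 = [3, -5, -4, -9]   # u2 = 3x^3 - 5x^2 - 4x - 9
v  = [2, -2, 12, -6]   # v  = 2x^3 - 2x^2 + 12x - 6

a1, a2 = -4, 2         # candidate solution of the first two equations

# All four equations hold iff the combination matches v coordinate-wise.
combo = [a1 * c1 + a2 * c2 for c1, c2 in zip(u1, u2)]
print(combo == v)      # True, so v = -4*u1 + 2*u2
```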
<br />
<math>\mbox{Since it holds, } v\in W</math></div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_For_Tuesday_October_306-240/Classnotes For Tuesday October 32006-10-25T02:23:36Z<p>Wongpak: </p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
===Links to Classnotes===<br />
* PDF file by [[User:Alla]]: [[Media:MAT_Lect007.pdf|Week 4 Lecture 1 notes]]<br />
* PDF file by [[User:Gokmen]]: [[Media:06-240-Lecture-03-October.pdf|Week 4 Lecture 1 notes]]<br />
<br />
----<br />
<math>\mbox{Definition}{}_{}^{}</math> <br />
<br />
<math>v\in V \mbox{ is a linear combination of elements in } S\subset V</math><br />
<math> \mbox{ if }\exists u_1,\ldots,u_n\in S \mbox{ and } a_1,\dots,a_n \in F \mbox{ such that } v=\sum a_i u_i</math><br />
<br />
<math>\mbox{Example}{}_{}^{}</math><br />
<br />
<math>\mbox{In }P_3(\mathbb{R})\mbox{,}</math><br />
<math>v_1^{}=2x^3-2x^2+12x-6 \mbox{ is a linear combination of:}</math><br />
<math>u_1^{}=x^3-2x^2-5x-3\mbox{ and }u_2=3x^3-5x^2-4x-9</math><br />
<math>\mbox{but } v_2^{}=3x^3-2x^2+7x+8 \mbox{ is not.}</math><br />
<br />
<math>\mbox{Why?}{}_{}^{}</math><br />
<br />
<math>v_1^{}=2x^3-2x^2+12x-6=a_1^{}u_1+a_2u_2</math><br />
<br />
<math>=a_1(x^3-2x^2-5x-3)+a_2(3x^3-5x^2-4x-9){}_{}^{}</math><br />
<br />
<math>v_1^{}=-4u_1+2u_2</math><br />
<br />
<math>\mbox{Definition}{}_{}^{}</math> <br />
<br />
<math>\mbox{We say that a subset }S\subset V\mbox{ generates or spans }V </math> <br />
<br />
<math>\mbox{ if span }S=\lbrace\mbox{ all linear combinations of elements in } S\rbrace=V{}_{}^{}</math><br />
<br />
<math>\mbox{Examples}{}_{}^{}</math> <br />
<br />
<math>V=M_{2\times 2}(\mathbb{R})</math><br />
<br />
<math>M_1=\begin{pmatrix}1&0\\0&0\end{pmatrix},<br />
M_2=\begin{pmatrix}0&1\\0&0\end{pmatrix},<br />
M_3=\begin{pmatrix}0&0\\1&0\end{pmatrix}, <br />
M_4=\begin{pmatrix}0&0\\0&1\end{pmatrix}</math><br />
<br />
<math>N_1=\begin{pmatrix}0&1\\1&1\end{pmatrix},<br />
N_2=\begin{pmatrix}1&0\\1&1\end{pmatrix},<br />
N_3=\begin{pmatrix}1&1\\0&1\end{pmatrix}, <br />
N_4=\begin{pmatrix}1&1\\1&0\end{pmatrix}</math><br />
<br />
<math>\mbox{Claims}{}_{}^{}</math><br />
<br />
#<math>\lbrace M_1^{},M_2,M_3,M_4\rbrace\mbox{ generates }V</math><br />
#<math>\lbrace N_1^{},N_2,N_3,N_4\rbrace\mbox{ generates }V</math><br />
#<math>\lbrace M_1^{},M_2,M_3\rbrace\mbox{ does not generate }V</math><br />
#<math>\lbrace N_1^{},N_2,N_3\rbrace\mbox{ does not generate }V</math><br />
<br><br />
<math>\mbox{Proof of 1}{}_{}^{}</math><br />
<br />
<math>\mbox{Given any }B=\begin{pmatrix}b_{11}^{}&b_{12}\\b_{21}&b_{22}\end{pmatrix}\mbox{ need to find }a_1,a_2,a_3,a_4\mbox{ such that,}</math><br />
<br />
<math>\begin{pmatrix}b_{11}^{}&b_{12}\\b_{21}&b_{22}\end{pmatrix}=B=a_1M_1+a_2M_2+a_3M_3+a_4M_4=\begin{pmatrix}a_1&0\\0&0\end{pmatrix}<br />
+\begin{pmatrix}0&a_2\\0&0\end{pmatrix}<br />
+\begin{pmatrix}0&0\\a_3&0\end{pmatrix}<br />
+\begin{pmatrix}0&0\\0&a_4\end{pmatrix}</math><br />
<br />
<math>=\begin{pmatrix}a_1^{}&a_2\\a_3&a_4\end{pmatrix}\Leftrightarrow<br />
\begin{cases}b_{11}=a_1\\b_{12}=a_2\\b_{21}=a_3\\b_{22}=a_4\end{cases}</math><br />
<math>\mbox{A system of 4 equations with 4 unknowns}{}_{}^{}</math><br />
<br><br />
<math>\mbox{Proof of 2}{}_{}^{}</math><br />
<br />
<math>\begin{pmatrix}b_{11}^{}&b_{12}\\b_{21}&b_{22}\end{pmatrix}<br />
=B=a_1N_1+a_2N_2+a_3N_3+a_4N_4=<br />
\begin{pmatrix}0&a_1\\a_1&a_1\end{pmatrix}<br />
+\begin{pmatrix}a_2&0\\a_2&a_2\end{pmatrix}<br />
+\begin{pmatrix}a_3&a_3\\0&a_3\end{pmatrix}<br />
+\begin{pmatrix}a_4&a_4\\a_4&0\end{pmatrix}</math><br />
<br />
<math>=\begin{pmatrix}a_2^{}+a_3+a_4&a_1+a_3+a_4\\a_1+a_2+a_4&a_1+a_2+a_3\end{pmatrix}\Leftrightarrow<br />
\begin{cases}b_{11}=a_2+a_3+a_4\\b_{12}=a_1+a_3+a_4\\b_{21}=a_1+a_2+a_4\\b_{22}=a_1+a_2+a_3\end{cases}</math><br />
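A small numerical aside (not from the lecture): the coefficient matrix of this system is <math>J-I</math>, where <math>J</math> is the all-ones <math>4\times 4</math> matrix, and it is invertible, so a solution exists for every <math>B</math>. A sketch with an arbitrary target:

```python
import numpy as np

# Aside: the system from "Proof of 2" in matrix form.  Unknowns (a1,a2,a3,a4),
# one row per target entry b11, b12, b21, b22.  The coefficient matrix is
# J - I (J = all-ones matrix), which is invertible with inverse (1/3)J - I.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)

b = np.array([5, 6, 7, 8], dtype=float)  # entries of an arbitrary target B
a = np.linalg.solve(A, b)                # unique solution: the N_i generate

print(np.allclose(A @ a, b))             # True: the reconstruction matches B
```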
<br />
<math>\mbox{Trick}{}_{}^{}</math><br />
<br />
<math>M_1=\frac{1}{3}\left(N_1+N_2+N_3+N_4\right)-N_1</math><br />
<math>M_2=\frac{1}{3}\left(N_1+N_2+N_3+N_4\right)-N_2</math><br />
<math>M_3=\frac{1}{3}\left(N_1+N_2+N_3+N_4\right)-N_3</math><br />
<math>M_4=\frac{1}{3}\left(N_1+N_2+N_3+N_4\right)-N_4</math><br />
<br />
<math>B=b_{11}^{}M_1+b_{12}M_2+b_{21}M_3+b_{22}M_4</math><br />
<math>=b_{11}\left(\frac{1}{3}\left(N_1+N_2+N_3+N_4\right)-N_1\right)+\ldots</math><br />
<br />
<math>=\mbox{ a linear combination of }N_1^{},N_2,N_3,N_4</math><br />
<br><br />
<br />
<br><br />
<math>\mbox{Proof of 3}{}_{}^{}</math><br />
<br />
<math>\mbox{Indeed in }a_1^{}M_1+a_2M_2+a_3M_3=<br />
\begin{pmatrix}a_1&a_2\\a_3&0\end{pmatrix}\mbox{ lower right corner is always } 0<br />
</math><br />
<br />
<math>\mbox{for example }\begin{pmatrix}240&157\\e&\pi\end{pmatrix}\mbox{ is not in the span.}</math><br />
<br><br />
<br />
<br><br />
<math>\mbox{Proof of 4}{}_{}^{}</math><br />
<br />
<math>a_1^{}N_1+a_2N_2+a_3N_3=\begin{pmatrix}a_2+a_3&a_1+a_3\\a_1+a_2&a_1+a_2+a_3\end{pmatrix}</math><br />
<br />
<math>\mbox{Is }\begin{pmatrix}240&157\\e&\pi\end{pmatrix}\mbox{ in the span? }<br />
\begin{cases}240=a_2+a_3\\157=a_1+a_3\\e=a_1+a_2\\\pi=a_1+a_2+a_3\end{cases}\Rightarrow\mbox{No solution}</math><br />
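The "no solution" claim can be confirmed with the rank criterion: a linear system is solvable iff appending the right-hand side to the coefficient matrix does not raise the rank. A sketch (the matrix encoding is an assumption of this aside):

```python
import numpy as np

# Aside: only N1, N2, N3 are available, so the system has 4 equations in 3
# unknowns (a1, a2, a3).  One row per target entry b11, b12, b21, b22.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)
b = np.array([240, 157, np.e, np.pi])

# The system is solvable iff appending b does not raise the rank.
# Here it does, so this matrix is outside span(N1, N2, N3).
aug = np.column_stack([A, b])
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug))  # 3 4
```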
<br><br />
<br />
<br><br />
<math>\mbox{Motivation}{}_{}^{}</math><br />
<br />
<math>S\subset V\mbox{ is linearly dependent if it is wasteful,}</math><br />
<math>\mbox{i.e. if }\exists v\in V\mbox{ such that }\exists a_1^{}\ldots a_n\in F \mbox{ and }u_1^{}\ldots u_n\in S</math><br />
<math>\mbox{ and }\exists b_1^{}\ldots b_m\in F \mbox{ and }w_1\ldots w_m\in S</math><br />
<br />
<math>\mbox{so that }\sum_{i=1}^na_iu_i=v=\sum_{i=1}^mb_iw_i\mbox{ (two genuinely different representations)}</math><br />
<br />
<math>\sum a_iu_i-\sum b_iw_i=0</math><br />
<br />
<math>\mbox{can be represented as }\sum c_iz_i=0</math><br />
<br><br />
<br />
<br><br />
<math>\mbox{Definition}{}_{}^{}</math><br />
<br />
<math>S\subset V\mbox{ is called linearly dependent if you can find }</math><br />
<math>z_1^{}\ldots z_n\in S\mbox{, different from each other, and }c_1^{}\ldots c_n\in F\mbox{, not all of which are }0,</math><br />
<math>\mbox{such that }\sum c_iz_i=0. <br />
\mbox{Otherwise, }S\mbox{ is called linearly independent.}</math><br />
<br><br />
<br />
<br><br />
<math>\mbox{Example 1}{}_{}^{}</math><br />
<br />
<math>\mbox{In }\mathbb{R}^3, S=<br />
\lbrace\begin{pmatrix}1&2&3\end{pmatrix},<br />
\begin{pmatrix}4&5&6\end{pmatrix},<br />
\begin{pmatrix}7&8&9\end{pmatrix}\rbrace\mbox{ is linearly dependent}</math><br />
<br />
<math>1\cdot\begin{pmatrix}1&2&3\end{pmatrix}-<br />
2\cdot\begin{pmatrix}4&5&6\end{pmatrix}+<br />
1\cdot\begin{pmatrix}7&8&9\end{pmatrix}=0</math><br />
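This dependence can also be seen numerically: stacking the three rows into a matrix gives rank less than 3, and the combination found above really is zero. A sketch:

```python
import numpy as np

# Aside: the three rows of Example 1 stacked into a matrix.
# Rank < 3 means the rows are linearly dependent.
S = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=float)

print(np.linalg.matrix_rank(S))                  # 2, so the rows are dependent
print(np.allclose(1*S[0] - 2*S[1] + 1*S[2], 0))  # True: the combination from class
```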
<br><br />
<br />
<br><br />
<math>\mbox{Example 2}{}_{}^{}</math><br />
<br />
<math>\mathbb{R}^n, e_i=\begin{pmatrix}0\\\vdots\\1\\\vdots\\0\end{pmatrix}i^{th}\mbox{ row}</math><br />
<br />
<math>S=\lbrace e_1^{},\ldots,e_n\rbrace</math><br />
<br />
<math>\mbox{Claim }S\mbox{ is linearly independent}{}_{}^{}</math><br />
<br />
<math>\begin{pmatrix}0\\\vdots\\0\end{pmatrix}=0<br />
=\sum_{i=1}^na_ie_i<br />
=\begin{pmatrix}a_1\\a_2\\\vdots\\a_n\end{pmatrix}\Rightarrow<br />
\begin{matrix}a_1=0\\a_2=0\\\vdots\\a_n=0\end{matrix}</math><br />
<br />
<math>\mbox{So all }a_i^{}\mbox{ must be }0\Rightarrow S\mbox{ is not linearly dependent, i.e. it is linearly independent.}</math><br />
<br />
<math>\mbox{Claim }S\subset V\mbox{ is linearly independent iff whenever }\sum a_iu_i=0<br />
\mbox{ and distinct }u_i\in S\mbox{ then }\forall i\quad a_i=0</math><br />
<br><br />
<br />
<br><br />
<math>\mbox{Comments}{}_{}^{}</math><br />
<br />
#<math>\emptyset\subset V\mbox{ is linearly independent}</math><br />
#<math>\mbox{Suppose }u\in V,\quad \lbrace u\rbrace\mbox{ the singleton set is linearly independent iff }u_{}^{}\neq 0</math><br />
<br />
<math>\lbrace0\rbrace\mbox{ is linearly dependent; for example }7\cdot0=0</math><br />
<br />
<math>\mbox{if }u\neq0\mbox{ assume }a\cdot u=0\mbox{, and }a\neq0\Rightarrow a_{}^{-1}au=0<br />
\Rightarrow u=0\mbox{ contradiction results, so no such }a\mbox{ exists.}</math><br />
<math>\mbox{So }\lbrace u\rbrace\mbox{ is not linearly dependent, hence it is linearly independent.}</math></div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_For_Thursday_October_506-240/Classnotes For Thursday October 52006-10-25T02:09:38Z<p>Wongpak: </p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
===Links to Classnotes===<br />
<br />
* PDF file by [[User:Alla]]: [[Media:MAT_Lect008.pdf|Week 4 Lecture 2 notes]]<br />
* PDF file by [[User:Gokmen]]: [[Media:06-240-lecture-05-october.pdf|Week 4 Lecture 2 notes]]<br />
<br />
===Scan of Tutorial notes===<br />
<br />
* PDF file by [[User:Alla]]: [[Media:MAT_Tut004.pdf|Week 4 Tutorial notes]]<br />
<br />
----<math>\mbox{From last class}{}_{}^{}</math><br />
<br />
<math>M_1=\begin{pmatrix}1&0\\0&0\end{pmatrix},<br />
M_2=\begin{pmatrix}0&1\\0&0\end{pmatrix},<br />
M_3=\begin{pmatrix}0&0\\1&0\end{pmatrix}, <br />
M_4=\begin{pmatrix}0&0\\0&1\end{pmatrix}</math><br />
<br />
<math>N_1=\begin{pmatrix}0&1\\1&1\end{pmatrix},<br />
N_2=\begin{pmatrix}1&0\\1&1\end{pmatrix},<br />
N_3=\begin{pmatrix}1&1\\0&1\end{pmatrix}, <br />
N_4=\begin{pmatrix}1&1\\1&0\end{pmatrix}</math><br />
<br />
<math>\mbox{The }M_i\mbox{s generate }M_{2\times 2}</math><br />
<br />
<math>\mbox{Fact }T\subset\mbox{ span }S\Rightarrow \mbox{ span }T\subset\mbox{ span }S </math><br />
<br />
<math>S\subset V\mbox{ is linearly independent }\Leftrightarrow \mbox{ whenever }u_i\in S\mbox{ are distinct}</math><br />
<br />
<math>\sum a_iu_i=0\Rightarrow \forall i\quad a_i=0\quad\mbox{(waste not)}</math><br />
<br><br />
<br />
<br><br />
<math>\mbox{Comments}{}_{}^{}</math><br />
#<math>\emptyset\subset V\mbox{ is linearly independent}</math><br />
#<math>\lbrace u\rbrace\mbox{ is linearly independent iff }u_{}^{}\neq 0</math><br />
#<math>\mbox{If }S_1^{}\subset S_2\subset V</math><br />
##<math>\mbox{If }S_1^{}\mbox{ is linearly dependent, so is }S_2</math> <br />
##<math>\mbox{If }S_2^{}\mbox{ is linearly dependent, so is }S_1</math><br />
##<math>\mbox{If }S_1^{}\mbox{ generates }V\mbox{, so does }S_2</math><br />
##<math>\mbox{If }S_2^{}\mbox{ does not generate }V\mbox{ neither does }S_1</math><br />
#<math>\mbox{If }S_{}^{}\mbox{ is linearly independent in }V\mbox{ and }v\notin S\mbox{ then }S\cup\lbrace v\rbrace\mbox{ is linearly dependent iff }v\in\mbox{ span }S.</math><br />
<br><br />
<br />
<br><br />
<math>\mbox{Proof}{}_{}^{}</math><br />
<br />
<math>\mbox{1.}\Leftarrow:\mbox{ start from second assertion and deduce first.}</math><br />
<br />
<math>\mbox{Assume }v_{}^{}\in \mbox{span }S</math><br />
<math>v=\sum a_iu_i\mbox{ where }u_i\in S, a_i\in F</math><br />
<br />
<math>\sum a_iu_i-1\cdot v=0\mbox{ this is a linear combination of elements in }S\cup\lbrace v\rbrace</math><br />
<math>\mbox{ in which not all coefficients are }0 \mbox{ and which add to }0_{}^{}.</math><br />
<math>\mbox{So }S\cup \lbrace v\rbrace\mbox{ is linearly dependent by definition}</math><br />
<br><br />
<math>\mbox{2.}:\Rightarrow\mbox{ Assume }S\cup \lbrace v\rbrace\mbox{ is linearly dependent }\Rightarrow\mbox{ a linear combination can be found, of the form:}</math><br />
<br />
<math>(*)\qquad\sum a_iu_i+bv=0\mbox{ where }u_i\in S\mbox{ and not all of the }a_i \mbox{ and }b \mbox{ are }0</math><br />
<br />
<math>\mbox{If }b=0\mbox{, then }\sum a_iu_i=0\mbox{ where not all }a_i\mbox{ are }0 </math><br />
<math>{}_{}^{}\Rightarrow S \mbox{ is linearly dependent}</math><br />
<math>{}_{}^{}\mbox{but initial assumption was }S\mbox{ is linearly independent.}\Rightarrow \mbox{ contradiction so }b\neq0</math><br />
<math>\mbox{So divide by }b\mbox{: (*) becomes }\sum\frac{a_i}{b}u_i + v = 0\Rightarrow v=-\sum\frac{a_i}{b}u_i\Rightarrow v\in \mbox{ span }S</math><br />
<br><br />
<br />
<br><br />
<math>\mbox{Definition}{}_{}^{}</math><br />
<br />
<math>{}_{}^{}\mbox{A basis of a vector space }V\mbox{ is a subset }\beta\subset V</math><br />
<math>{}_{}^{}\mbox{such that}</math><br />
#<math>{}_{}^{}\beta\mbox{ generates }V\mbox{ or }V=\mbox{ span }\beta</math><br />
#<math>{}_{}^{}\beta\mbox{ is linearly independent.}</math><br />
<br><br />
<br />
<br><br />
<math>\mbox{Examples}{}_{}^{}</math><br />
<br />
<math>1. \beta=\emptyset{}_{}^{}\mbox{ is a basis of }\lbrace0\rbrace</math><br />
<br />
<math>2. {}_{}^{}\mbox{Let }V\mbox{ be }\mathbb{R}\mbox{ as a vector space over }\mathbb{R}</math><br />
<math>\qquad{}_{}^{}\beta=\lbrace5\rbrace\mbox{ and }\beta=\lbrace1\rbrace\mbox{ are bases.}</math><br />
<br />
<math>3.{}_{}^{}\mbox{ Let }V\mbox{ be }\mathbb{C}\mbox{ as a vector space over }\mathbb{R} \quad\beta=\lbrace1,i\rbrace</math><br />
<br />
:<math>\qquad{}_{}^{}\mbox{Check}</math><br />
<br />
:<math>\qquad{}_{}^{}\mbox{1. Every complex number is a linear combination of }\beta.</math><br />
::<math>Z=a+bi=a\cdot 1+b\cdot i\mbox{ with coefficients in }\mathbb{R}\mbox{ so }\lbrace1,i\rbrace\mbox{ generates}</math><br />
<br />
:<math>\qquad{}_{}^{}\mbox{2. Show }\beta=\lbrace1,i\rbrace\mbox{ are linearly independent. Assume }a\cdot 1+b\cdot i=0\mbox{ where }a,b\in\mathbb{R}</math><br />
::<math>{}_{}^{}\Rightarrow a+bi=0\Rightarrow a=0\mbox{ and } b=0</math><br />
<br />
<math>{}_{}^{}\mbox{4. }V=\mathbb{R}^n,\qquad<br />
e_1=\begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix},<br />
e_2=\begin{pmatrix}0\\1\\\vdots\\0\end{pmatrix},\ldots,<br />
e_n=\begin{pmatrix}0\\0\\\vdots\\1\end{pmatrix}</math><br />
<br />
:<math>{}_{}^{}e_1\ldots e_n\mbox{ are a basis of }V</math><br />
::<math>{}_{}^{}\mbox{They span }\begin{pmatrix}a_1\\\vdots\\a_n\end{pmatrix}=\sum a_ie_i</math><br />
::<math>{}_{}^{}\mbox{They are linearly independent. }\sum a_ie_i=0\Rightarrow \sum a_ie_i=<br />
\begin{pmatrix}a_1\\\vdots\\a_n\end{pmatrix}=0\Rightarrow a_i=0 \quad\forall i</math><br />
<br />
<math>{}_{}^{}\mbox{5. In }V=P_3(\mathbb{R}),\qquad \beta=\lbrace 1,x,x^2,x^3\rbrace</math><br />
<br />
<math>{}_{}^{}\mbox{6. In }V=P_1(\mathbb{R})=\lbrace ax+b\rbrace,\qquad \beta=\lbrace 1+x,1-x\rbrace\mbox{ is a basis}</math><br />
:<math>{}_{}^{}\mbox{1. Generate. Write }u_1=1+x,\ u_2=1-x.</math><br />
::<math>u_1+u_2=2\Rightarrow \frac{1}{2}(u_1+u_2)=1\mbox{ so }1 \in\mbox{ span }\beta</math><br />
::<math>u_1-u_2=2x\Rightarrow \frac{1}{2}(u_1-u_2)=x\mbox{ so }x \in\mbox{ span }\beta</math><br />
::<math>{}_{}^{}\mbox{ so }P_1(\mathbb{R})=\mbox{span}\lbrace 1,x\rbrace \subset\mbox{ span }\beta</math><br />
:<math>{}_{}^{}\mbox{2. Linearly independent. Assume }au_1+bu_2=0</math><br />
::<math>\Rightarrow a(1+x)+b(1-x)=0\Rightarrow a+b+(a-b)x=0</math><br />
::<math>{}_{}^{}\Rightarrow a+b=0\mbox{ and }a-b=0</math><br />
::<math>(a+b)+(a-b)\Rightarrow 2a=0\Rightarrow a=0</math><br />
::<math>(a+b)-(a-b)\Rightarrow 2b=0\Rightarrow b=0</math><br />
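In general, the coordinates of <math>p(x)=b+ax</math> in this basis can be read off from the same two equations; a small sketch (the helper name <code>coords</code> is ours, not from the notes):

```python
from fractions import Fraction

# Aside: coordinates of p(x) = b + a*x in the basis {1+x, 1-x} of P_1(R).
# From p = c1*(1+x) + c2*(1-x): c1 + c2 = b and c1 - c2 = a.
# The helper name `coords` is ours, not from the notes.
def coords(a, b):
    return Fraction(b + a, 2), Fraction(b - a, 2)

c1, c2 = coords(3, 5)  # p(x) = 5 + 3x
print(c1, c2)          # 4 1, i.e. 5 + 3x = 4*(1+x) + 1*(1-x)
```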
<br><br />
<br />
<br><br />
<math>\mbox{Theorem}{}_{}^{}</math><br />
<br />
<math>{}_{}^{}\mbox{A subset }\beta\mbox{ of a vector space }V \mbox{ is a basis iff every }v\in V\mbox{ can be expressed as}</math><br />
<math>{}_{}^{}\mbox{a linear combination of elements in }</math><br />
<math>{}_{}^{}\beta \mbox{ in exactly one way.}</math><br />
<br><br />
<br />
<br><br />
<math>\mbox{Proof}{}_{}^{}</math><br />
<br />
<math>{}_{}^{}\mbox{It is a combination of things we already know.}</math><br />
#<math>{}_{}^{}\beta\mbox{ generates}</math><br />
#<math>{}_{}^{}\beta\mbox{ is linearly independent}</math></div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_For_Thursday_October_506-240/Classnotes For Thursday October 52006-10-25T02:08:14Z<p>Wongpak: /* Links to Classnotes */</p>
<hr />
<div>===Links to Classnotes===<br />
<br />
* PDF file by [[User:Alla]]: [[Media:MAT_Lect008.pdf|Week 4 Lecture 2 notes]]<br />
* PDF file by [[User:Gokmen]]: [[Media:06-240-lecture-05-october.pdf|Week 4 Lecture 2 notes]]<br />
<br />
===Scan of Tutorial notes===<br />
<br />
* PDF file by [[User:Alla]]: [[Media:MAT_Tut004.pdf|Week 4 Tutorial notes]]<br />
<br />
----<math>\mbox{From last class}{}_{}^{}</math><br />
<br />
<math>M_1=\begin{pmatrix}1&0\\0&0\end{pmatrix},<br />
M_2=\begin{pmatrix}0&1\\0&0\end{pmatrix},<br />
M_3=\begin{pmatrix}0&0\\1&0\end{pmatrix}, <br />
M_4=\begin{pmatrix}0&0\\0&1\end{pmatrix}</math><br />
<br />
<math>N_1=\begin{pmatrix}0&1\\1&1\end{pmatrix},<br />
N_2=\begin{pmatrix}1&0\\1&1\end{pmatrix},<br />
N_3=\begin{pmatrix}1&1\\0&1\end{pmatrix}, <br />
N_4=\begin{pmatrix}1&1\\1&0\end{pmatrix}</math><br />
<br />
<math>\mbox{The }M_i\mbox{s generate }M_{2\times 2}</math><br />
<br />
<math>\mbox{Fact }T\subset\mbox{ span }S\Rightarrow \mbox{ span }T\subset\mbox{ span }S </math><br />
<br />
<math>S\subset V\mbox{ is linearly independent }\Leftrightarrow \mbox{ whenever }u_i\in S\mbox{ are distinct}</math><br />
<br />
<math>\sum a_iu_i=0\Rightarrow \forall i,\ a_i=0</math><br />
<br><br />
<br />
<br><br />
<math>\mbox{Comments}{}_{}^{}</math><br />
#<math>\emptyset\subset V\mbox{ is linearly independent}</math><br />
#<math>\lbrace u\rbrace\mbox{ is linearly independent iff }u_{}^{}\neq 0</math><br />
#<math>\mbox{If }S_1^{}\subset S_2\subset V</math><br />
##<math>\mbox{If }S_1^{}\mbox{ is linearly dependent, so is }S_2</math> <br />
##<math>\mbox{If }S_2^{}\mbox{ is linearly independent, so is }S_1</math><br />
##<math>\mbox{If }S_1^{}\mbox{ generates }V\mbox{, so does }S_2</math><br />
##<math>\mbox{If }S_2^{}\mbox{ does not generate }V\mbox{ neither does }S_1</math><br />
#<math>\mbox{If }S_{}^{}\mbox{ is linearly independent in }V\mbox{ and }v\notin S\mbox{, then }S\cup\lbrace v\rbrace\mbox{ is linearly dependent iff }v\in\mbox{ span }S.</math><br />
<br><br />
<br />
<br><br />
<math>\mbox{Proof}{}_{}^{}</math><br />
<br />
<math>\mbox{1.}\Leftarrow:\mbox{ start from second assertion and deduce first.}</math><br />
<br />
<math>\mbox{Assume }v_{}^{}\in \mbox{span }S</math><br />
<math>v=\sum a_iu_i\mbox{ where }u_i\in S, a_i\in F</math><br />
<br />
<math>\sum a_iu_i-1\cdot v=0\mbox{; this is a linear combination of elements in }S\cup\lbrace v\rbrace</math><br />
<math>\mbox{ in which not all coefficients are }0 \mbox{ and which add to }0_{}^{}.</math><br />
<math>\mbox{So }S\cup \lbrace v\rbrace\mbox{ is linearly dependent by definition}</math><br />
<br><br />
<math>\mbox{2.}:\Rightarrow\mbox{ Assume }S\cup \lbrace v\rbrace\mbox{ is linearly dependent }\Rightarrow\mbox{ a linear combination can be found, of the form:}</math><br />
<br />
<math>(*)\qquad\sum a_iu_i+bv=0\mbox{ where }u_i\in S\mbox{ and not all of the }a_i \mbox{ and }b \mbox{ are }0</math><br />
<br />
<math>\mbox{If }b=0\mbox{, then }\sum a_iu_i=0\mbox{ where not all }a_i\mbox{ are }0 </math><br />
<math>{}_{}^{}\Rightarrow S \mbox{ is linearly dependent}</math><br />
<math>{}_{}^{}\mbox{but initial assumption was }S\mbox{ is linearly independent.}\Rightarrow \mbox{ contradiction so }b\neq0</math><br />
<math>\mbox{So divide by }b\mbox{: (*) becomes }\sum\frac{a_i}{b}u_i + v = 0\Rightarrow v=-\sum\frac{a_i}{b}u_i\Rightarrow v\in \mbox{ span }S</math><br />
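The criterion just proved can be checked numerically (my sketch, not part of the notes; a matrix-rank comparison stands in for the span-membership test, and the example vectors are made up):

```python
import numpy as np

def in_span(S, v):
    """v lies in span S exactly when appending v does not raise the rank,
    i.e. when the rows of S together with v are linearly dependent."""
    return np.linalg.matrix_rank(np.vstack([S, v])) == np.linalg.matrix_rank(S)

S = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])   # linearly independent rows

v1 = np.array([2.0, 3.0, 0.0])    # lies in span S
v2 = np.array([0.0, 0.0, 1.0])    # does not lie in span S

# S with v adjoined is linearly dependent  <=>  v is in span S:
print(in_span(S, v1))  # True:  the rows of S plus v1 are dependent
print(in_span(S, v2))  # False: the rows of S plus v2 stay independent
```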
<br><br />
<br />
<br><br />
<math>\mbox{Definition}{}_{}^{}</math><br />
<br />
<math>{}_{}^{}\mbox{A basis of a vector space }V\mbox{ is a subset }\beta\subset V</math><br />
<math>{}_{}^{}\mbox{such that}</math><br />
#<math>{}_{}^{}\beta\mbox{ generates }V\mbox{ or }V=\mbox{ span }\beta</math><br />
#<math>{}_{}^{}\beta\mbox{ is linearly independent.}</math><br />
<br><br />
<br />
<br><br />
<math>\mbox{Examples}{}_{}^{}</math><br />
<br />
<math>1. \beta=\emptyset{}_{}^{}\mbox{ is a basis of }\lbrace0\rbrace</math><br />
<br />
<math>2. {}_{}^{}\mbox{Let }V\mbox{ be }\mathbb{R}\mbox{ as a vector space over }\mathbb{R}</math><br />
<math>\qquad{}_{}^{}\beta=\lbrace5\rbrace\mbox{ and }\beta=\lbrace1\rbrace\mbox{ are bases.}</math><br />
<br />
<math>3.{}_{}^{}\mbox{ Let }V\mbox{ be }\mathbb{C}\mbox{ as a vector space over }\mathbb{R} \quad\beta=\lbrace1,i\rbrace</math><br />
<br />
:<math>\qquad{}_{}^{}\mbox{Check}</math><br />
<br />
:<math>\qquad{}_{}^{}\mbox{1. Every complex number is a linear combination of }\beta.</math><br />
::<math>Z=a+bi=a\cdot 1+b\cdot i\mbox{ with coefficients in }\mathbb{R}\mbox{ so }\lbrace1,i\rbrace\mbox{ generates}</math><br />
<br />
:<math>\qquad{}_{}^{}\mbox{2. Show }\beta=\lbrace1,i\rbrace\mbox{ are linearly independent. Assume }a\cdot 1+b\cdot i=0\mbox{ where }a,b\in\mathbb{R}</math><br />
::<math>{}_{}^{}\Rightarrow a+bi=0\Rightarrow a=0\mbox{ and } b=0</math><br />
<br />
<math>{}_{}^{}\mbox{4. }V=\mathbb{R}^n=<br />
\left\lbrace\begin{pmatrix}a_1\\\vdots\\a_n\end{pmatrix}\right\rbrace,\qquad<br />
e_1=\begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix},<br />
e_2=\begin{pmatrix}0\\1\\\vdots\\0\end{pmatrix},\ldots,<br />
e_n=\begin{pmatrix}0\\0\\\vdots\\1\end{pmatrix}</math><br />
<br />
:<math>{}_{}^{}e_1\ldots e_n\mbox{ are a basis of }V</math><br />
::<math>{}_{}^{}\mbox{They span }\begin{pmatrix}a_1\\\vdots\\a_n\end{pmatrix}=\sum a_ie_i</math><br />
::<math>{}_{}^{}\mbox{They are linearly independent. }\sum a_ie_i=0\Rightarrow \sum a_ie_i=<br />
\begin{pmatrix}a_1\\\vdots\\a_n\end{pmatrix}=0\Rightarrow a_i=0 \quad\forall i</math><br />
<br />
<math>{}_{}^{}\mbox{5. In }V=P_3(\mathbb{R}),\qquad \beta=\lbrace 1,x,x^2,x^3\rbrace</math><br />
<br />
<math>{}_{}^{}\mbox{6. In }V=P_1(\mathbb{R})=\lbrace ax+b\rbrace,\qquad \beta=\lbrace 1+x,1-x\rbrace\mbox{ is a basis}</math><br />
:<math>{}_{}^{}\mbox{1. Generates. Write }u_1=1+x,\ u_2=1-x</math><br />
::<math>u_1+u_2=2\Rightarrow \frac{1}{2}(u_1+u_2)=1\mbox{ so }1 \in\mbox{ span }S</math><br />
::<math>u_1-u_2=2x\Rightarrow \frac{1}{2}(u_1-u_2)=x\mbox{ so }x \in\mbox{ span }S</math><br />
::<math>{}_{}^{}\mbox{ so span}\lbrace 1,x\rbrace \subset\mbox{ span }\beta</math><br />
:<math>{}_{}^{}\mbox{2. Linearly independent. Assume }au_1+bu_2=0</math><br />
::<math>\Rightarrow a(1+x)+b(1-x)=0\Rightarrow a+b+(a-b)x=0</math><br />
::<math>{}_{}^{}\Rightarrow a+b=0\mbox{ and }a-b=0</math><br />
::<math>(a+b)+(a-b)\Rightarrow 2a=0\Rightarrow a=0</math><br />
::<math>(a+b)-(a-b)\Rightarrow 2b=0\Rightarrow b=0</math><br />
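The same two checks can be cross-checked numerically (my sketch, not from the lecture): writing a polynomial <math>b+ax</math> of <math>P_1(\mathbb{R})</math> as the coefficient vector (constant term, ''x'' term), the computation above amounts to a 2×2 determinant and two linear solves.

```python
import numpy as np

# Coefficient vectors (constant term, x term) for u1 = 1 + x, u2 = 1 - x
u1 = np.array([1.0, 1.0])
u2 = np.array([1.0, -1.0])
M = np.column_stack([u1, u2])

# Nonzero determinant <=> the columns are linearly independent; two
# independent vectors in the 2-dimensional space P_1(R) form a basis.
print(np.linalg.det(M))  # -2.0, so {1+x, 1-x} is a basis

# Coordinates of 1 and of x with respect to this basis (cf. 1/2(u1+u2)=1
# and 1/2(u1-u2)=x above):
print(np.linalg.solve(M, np.array([1.0, 0.0])))  # [0.5  0.5]
print(np.linalg.solve(M, np.array([0.0, 1.0])))  # [0.5 -0.5]
```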
<br><br />
<br />
<br><br />
<math>\mbox{Theorem}{}_{}^{}</math><br />
<br />
<math>{}_{}^{}\mbox{A subset }\beta\mbox{ of a vector space }V \mbox{ is a basis iff every }v\in V\mbox{ can be expressed as}</math><br />
<math>{}_{}^{}\mbox{a linear combination of elements in }</math><br />
<math>{}_{}^{}\beta \mbox{ in exactly one way.}</math><br />
<br><br />
<br />
<br><br />
<math>\mbox{Proof}{}_{}^{}</math><br />
<br />
<math>{}_{}^{}\mbox{It is a combination of things we already know.}</math><br />
#<math>{}_{}^{}\beta\mbox{ generates}</math><br />
#<math>{}_{}^{}\beta\mbox{ is linearly independent}</math></div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_For_Tuesday_October_1006-240/Classnotes For Tuesday October 102006-10-25T02:07:19Z<p>Wongpak: /* Scan of Lecture Notes */</p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
====Scan of Lecture Notes====<br />
<br />
* PDF file by [[User:Alla]]: [[Media:MAT_Lect009.pdf|Week 5 Lecture 1 notes]]<br />
* PDF file by [[User:Gokmen]]: [[Media:06-240-lecture-10-october.pdf|Week 5 Lecture 1 notes]]<br />
<br />
==A Quick Summary by {{Dror}}==<br />
(Intentionally terse. A sea of details appears in the book and already appeared on the blackboard. But these are useless without some '''organizing principles'''; in some sense, "understanding" is precisely being able to see those principles within the sea of details. Yet don't fool yourself into thinking that the principles are enough even without the details!)<br />
<br />
'''Theorem.''' A finite generating set <math>G</math> has a subset which is a basis.<br />
<br />
'''Proof Sketch.''' Grab more and more elements of <math>G</math> so long as they are linearly independent. When you can't add any more, you have a basis.<br />
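The greedy procedure in this proof sketch can be illustrated numerically (my sketch, not part of the summary; numpy rank computations stand in for the independence test, and the generating set is made up):

```python
import numpy as np

def extract_basis(G):
    """Greedily keep the vectors of G that stay linearly independent of
    those already chosen, as in the proof sketch: a vector is kept iff
    adding it raises the rank of the chosen collection."""
    basis = []
    for v in G:
        candidate = basis + [v]
        if np.linalg.matrix_rank(np.array(candidate)) == len(candidate):
            basis.append(v)
    return basis

# A generating set of R^2 with one redundant vector:
G = [np.array([1.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 1.0])]
B = extract_basis(G)
print(len(B))  # 2: the redundant vector [2, 0] was skipped
```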
<br />
'''Lemma.''' (The Replacement Lemma) If <math>G</math> generates and <math>L</math> is linearly independent, then <math>|L|\leq|G|</math> and you can replace <math>|L|</math> of the elements of <math>G</math> by the elements of <math>L</math>, and still have a generating set.<br />
<br />
'''Proof Sketch.''' Insert the elements of <math>L</math> one by one, and for each one that comes in, take one out of <math>G</math>. Which one? One used in expressing the newcomer in terms of the vectors already in <math>G</math>. Such a vector must exist or else the newcomer is a linear combination of some of the elements of <math>L</math>, but <math>L</math> is linearly independent.<br />
<br />
'''Theorem.''' If a vector space <math>V</math> has a finite basis, all bases thereof are finite and have the same number of elements, the "dimension of <math>V</math>".<br />
<br />
'''Proof Sketch.''' By replacement, <math>|\alpha|\leq|\beta|</math> and <math>|\beta|\leq|\alpha|</math>.<br />
<br />
'''Theorem.''' Assume <math>\dim V=n</math>.<br />
# If <math>G</math> generates, <math>|G|\geq n</math>. In case of equality, <math>G</math> is a basis.<br />
# If <math>L</math> is linearly independent, <math>|L|\leq n</math>. In case of equality, <math>L</math> is a basis.<br />
<br />
'''Proof Sketch.'''<br />
# Find a basis within <math>G</math>; it has <math>n</math> elements.<br />
# Use replacement to place the elements of <math>L</math> within some basis.</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_For_Thursday_October_1206-240/Classnotes For Thursday October 122006-10-23T14:54:48Z<p>Wongpak: </p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
[[Media:06-240-lecture-12-october.pdf|Week 5 Lecture 2 notes]]<br />
<br />
[[Media:06-240-tutorial-12-october.pdf|Week 5 Tutorial notes]]</div>Wongpakhttp://drorbn.net/index.php?title=Template:06-240/NavigationTemplate:06-240/Navigation2006-10-23T14:52:55Z<p>Wongpak: </p>
<hr />
<div>{| cellpadding="0" cellspacing="0" style="clear: right; float: right"<br />
|- align=right<br />
|<div class="NavFrame"><div class="NavHead">[[06-240]]/[[Template:06-240/Navigation|Navigation Panel]]&nbsp;&nbsp;</div><br />
<div class="NavContent"><br />
{| border="1px" cellpadding="1" cellspacing="0" width="220" style="margin: 0 0 1em 0.5em; font-size: small"<br />
|-<br />
|colspan=3|<b style="color:red; font-size:200%;">NEW!</b> {{Dror}} will hold special office hours today Monday October 23 5-7PM at Bahen 6178.<br />
|-<br />
!#<br />
!Week of...<br />
!Notes and Links<br />
|-<br />
|align=center|1<br />
|Sep 11<br />
|[[06-240/About This Class|About]], [[06-240/Classnotes For Tuesday, September 12|Tue]], [[06-240/Homework Assignment 1|HW1]], [[06-240/Putnam Competition|Putnam]], [[06-240/Classnotes for Thursday, September 14|Thu]]<br />
|-<br />
|align=center|2<br />
|Sep 18<br />
|[[06-240/Classnotes For Tuesday, September 19|Tue]], [[06-240/Homework Assignment 2|HW2]], [[06-240/Classnotes For Thursday, September 21|Thu]]<br />
|-<br />
|align=center|3<br />
|Sep 25<br />
|[[06-240/Classnotes For Tuesday September 26|Tue]], [[06-240/Homework Assignment 3|HW3]], [[06-240/Class Photo|Photo]], [[06-240/Classnotes For Thursday, September 28|Thu]]<br />
|-<br />
|align=center|4<br />
|Oct 2<br />
|[[06-240/Classnotes For Tuesday October 3|Tue]], [[06-240/Homework Assignment 4|HW4]], [[06-240/Classnotes For Thursday October 5|Thu]]<br />
|-<br />
|align=center|5<br />
|Oct 9<br />
|[[06-240/Classnotes For Tuesday October 10|Tue]], [[06-240/Homework Assignment 5|HW5]], [[06-240/Classnotes For Thursday October 12|Thu]]<br />
|-<br />
|align=center|6<br />
|Oct 16<br />
|[[06-240/Linear Algebra - Why We Care|Why?]], [http://en.wikipedia.org/wiki/Isomorphism Iso], [[06-240/Classnotes For Tuesday October 17|Tue]]<br />
|-<br />
|align=center|7<br />
|Oct 23<br />
|Term Test, Extra Hour<br />
|-<br />
|align=center|8<br />
|Oct 30<br />
|HW6<br />
|-<br />
|align=center|9<br />
|Nov 6<br />
|HW7<br />
|-<br />
|align=center|10<br />
|Nov 13<br />
|HW8<br />
|-<br />
|align=center|11<br />
|Nov 20<br />
|HW9<br />
|-<br />
|align=center|12<br />
|Nov 27<br />
|HW10<br />
|-<br />
|align=center|13<br />
|Dec 4<br />
|<br />
|-<br />
|align=center|F<br />
|Dec 11<br />
|Final: Dec 13 2-5PM at BN3<br />
|-<br />
|colspan=3 align=center|[[06-240/Register of Good Deeds|Register of Good Deeds]]<br />
|-<br />
|colspan=3 align=center|[[Image:06-240-ClassPhoto.jpg|180px]]<br>[[06-240/Class Photo|Add your name / see who's in!]]<br />
|}<br />
</div></div><br />
|}</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_For_Tuesday_October_1706-240/Classnotes For Tuesday October 172006-10-23T13:53:03Z<p>Wongpak: </p>
<hr />
<div>[[Media:06-240-lecture-october17.pdf|Week 6 Lecture 1 notes]]</div>Wongpakhttp://drorbn.net/index.php?title=Template:06-240/NavigationTemplate:06-240/Navigation2006-10-23T13:50:19Z<p>Wongpak: </p>
<hr />
<div>{| cellpadding="0" cellspacing="0" style="clear: right; float: right"<br />
|- align=right<br />
|<div class="NavFrame"><div class="NavHead">[[06-240]]/[[Template:06-240/Navigation|Navigation Panel]]&nbsp;&nbsp;</div><br />
<div class="NavContent"><br />
{| border="1px" cellpadding="1" cellspacing="0" width="220" style="margin: 0 0 1em 0.5em; font-size: small"<br />
|-<br />
|colspan=3|<b style="color:red; font-size:200%;">NEW!</b> {{Dror}} will hold special office hours today Monday October 23 5-7PM at Bahen 6178.<br />
|-<br />
!#<br />
!Week of...<br />
!Notes and Links<br />
|-<br />
|align=center|1<br />
|Sep 11<br />
|[[06-240/About This Class|About]], [[06-240/Classnotes For Tuesday, September 12|Tue]], [[06-240/Homework Assignment 1|HW1]], [[06-240/Putnam Competition|Putnam]], [[06-240/Classnotes for Thursday, September 14|Thu]]<br />
|-<br />
|align=center|2<br />
|Sep 18<br />
|[[06-240/Classnotes For Tuesday, September 19|Tue]], [[06-240/Homework Assignment 2|HW2]], [[06-240/Classnotes For Thursday, September 21|Thu]]<br />
|-<br />
|align=center|3<br />
|Sep 25<br />
|[[06-240/Classnotes For Tuesday September 26|Tue]], [[06-240/Homework Assignment 3|HW3]], [[06-240/Class Photo|Photo]], [[06-240/Classnotes For Thursday, September 28|Thu]]<br />
|-<br />
|align=center|4<br />
|Oct 2<br />
|[[06-240/Classnotes For Tuesday October 3|Tue]], [[06-240/Homework Assignment 4|HW4]], [[06-240/Classnotes For Thursday October 5|Thu]]<br />
|-<br />
|align=center|5<br />
|Oct 9<br />
|[[06-240/Classnotes For Tuesday October 10|Tue]], [[06-240/Homework Assignment 5|HW5]]<br />
|-<br />
|align=center|6<br />
|Oct 16<br />
|[[06-240/Linear Algebra - Why We Care|Why?]], [http://en.wikipedia.org/wiki/Isomorphism Iso], [[06-240/Classnotes For Tuesday October 17|Tue]]<br />
|-<br />
|align=center|7<br />
|Oct 23<br />
|Term Test, Extra Hour<br />
|-<br />
|align=center|8<br />
|Oct 30<br />
|HW6<br />
|-<br />
|align=center|9<br />
|Nov 6<br />
|HW7<br />
|-<br />
|align=center|10<br />
|Nov 13<br />
|HW8<br />
|-<br />
|align=center|11<br />
|Nov 20<br />
|HW9<br />
|-<br />
|align=center|12<br />
|Nov 27<br />
|HW10<br />
|-<br />
|align=center|13<br />
|Dec 4<br />
|<br />
|-<br />
|align=center|F<br />
|Dec 11<br />
|Final: Dec 13 2-5PM at BN3<br />
|-<br />
|colspan=3 align=center|[[06-240/Register of Good Deeds|Register of Good Deeds]]<br />
|-<br />
|colspan=3 align=center|[[Image:06-240-ClassPhoto.jpg|180px]]<br>[[06-240/Class Photo|Add your name / see who's in!]]<br />
|}<br />
</div></div><br />
|}</div>Wongpakhttp://drorbn.net/index.php?title=Talk:06-240/Homework_Assignment_5Talk:06-240/Homework Assignment 52006-10-16T11:06:14Z<p>Wongpak: </p>
<hr />
<div>For the test, do we have to know the Lagrange formula? Although not covered in class, it is in Section 1.6, which we've been asked to read.<br />
<br />
The test material will only be announced on Tuesday. --[[User:Drorbn|Drorbn]] 13:02, 14 October 2006 (EDT)<br />
<br />
For question 28: "Let V be a finite-dimensional vector space over C with dimension n. Prove that if V is now ''regarded as a vector space over R'', then dim V = 2n"...<br />
Is this a formally defined concept? (that is, while it is obvious what they mean, how could you state it rigorously)<br />
<br />
<math>{\mathbb R}</math> is a subset of <math>{\mathbb C}</math>, so if you know how to multiply by scalars in <math>{\mathbb C}</math>, you automatically know how to multiply by scalars in <math>{\mathbb R}</math>. Thus every vector space over <math>{\mathbb C}</math> is also a vector space over <math>{\mathbb R}</math> (and in the same way, also over <math>{\mathbb Q}</math>). --[[User:Drorbn|Drorbn]] 22:01, 14 October 2006 (EDT)<br />
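A concrete illustration of the dim 2''n'' claim in question 28 (my sketch, not from the thread; the real basis <math>e_1,\ldots,e_n,ie_1,\ldots,ie_n</math> is one natural choice of ordering):

```python
import numpy as np

def real_coordinates(z):
    """Coordinates of a vector in C^n with respect to the real basis
    (e_1, ..., e_n, i*e_1, ..., i*e_n): 2n real numbers."""
    return np.concatenate([z.real, z.imag])

z = np.array([1 + 2j, 3 - 1j])   # a vector in C^2 (n = 2)
print(real_coordinates(z))       # [ 1.  3.  2. -1.] -- 2n = 4 real coordinates
```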
<br />
Hi, I think this site might help. http://mathforum.org/library/drmath/view/51973.html. [[User:Wongpak|Wongpak]] 07:06, 16 October 2006 (EDT)</div>Wongpakhttp://drorbn.net/index.php?title=Talk:06-240/Homework_Assignment_4Talk:06-240/Homework Assignment 42006-10-05T13:38:06Z<p>Wongpak: </p>
<hr />
<div>== Divisibility by Prime Number==<br />
<br />
Please correct me if I'm wrong.<br />
The operation of cutting away the unit digit is a distraction. If we keep track of the unit digit, each step is really the subtraction of a number, and that number is divisible by 7.<br />
The whole operation is shown as follows:<br />
<br />
{|<br />
|<div align="right">8641<s>5</s><br />
|-<br />
|<div align="right"> 10<s>5</s><br />
|&nbsp;&nbsp;105/7=21<br />
|-<br />
|<div align="right"> ----<br />
|-<br />
|<div align="right">863<s>1</s></div><br />
|-<br />
|<div align="right">2<s>1</s></div><br />
|&nbsp;&nbsp;21/7=3<br />
|-<br />
|<div align="right"> ---</div><br />
|-<br />
|<div align="right">86<s>1</s></div><br />
|-<br />
|<div align="right">2<s>1</s></div><br />
|&nbsp;&nbsp;21/7=3<br />
|-<br />
|<div align="right"> --</div><br />
|-<br />
|<div align="right">8<s>4</s></div><br />
|-<br />
|<div align="right">8<s>4</s></div><br />
|&nbsp;&nbsp;84/7=12<br />
|-<br />
|<div align="right"> -</div><br />
|-<br />
|<div align="right">0</div><br />
|&nbsp;&nbsp;0/7=0<br />
|}<br />
<br />
Since the operation is a series of subtractions of multiples of 7, the number we started from is divisible by 7 iff the resulting number is divisible by 7.<br />
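The procedure above can be sketched in code (my illustration, not part of the original discussion; the function name is made up):

```python
def divisible_by_7(n):
    """Test divisibility by 7 by repeatedly cutting off the unit digit d
    and subtracting 2*d from what remains.  Each step replaces 10*a + d
    by a - 2*d, and 10*(a - 2*d) = (10*a + d) - 21*d, so divisibility
    by 7 is preserved."""
    n = abs(n)
    while n >= 10:
        n, d = divmod(n, 10)   # cut away the unit digit d
        n = abs(n - 2 * d)     # subtract twice the unit digit
    return n in (0, 7)

print(divisible_by_7(86415))   # True: 86415 = 7 * 12345
print(divisible_by_7(86416))   # False
```

The same loop tests divisibility by 17 or 13 if the factor 2 is replaced by 5 or 9 respectively, as noted below.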
<br />
Moreover, there is a relationship between the unit digit, 2, and 7. Multiplying the unit digit by 21 (7 <math> \times </math> 3) gives a number whose unit digit is the original digit and whose leading (tens/hundreds) part is twice it.<br />
{|border="1"<br />
!Unit digit, <math>x</math> !! <math>x \times 3 \times 7</math><br />
|-<br />
|0 <br />
|0<br />
|-<br />
|1 <br />
|21<br />
|-<br />
|2 <br />
|42<br />
|-<br />
|3 <br />
|63<br />
|-<br />
|4 <br />
|84<br />
|-<br />
|5 <br />
|105<br />
|-<br />
|6 <br />
|126<br />
|-<br />
|7 <br />
|147<br />
|-<br />
|8 <br />
|168<br />
|-<br />
|9 <br />
|189<br />
|}<br />
<br />
From the table above, I've induced a similar criterion for divisibility by 17: the same operation, but with the unit digit multiplied by 5 instead of 2. For divisibility by 13, multiply the unit digit by 9. Alright, I think it will be more fun if it's explained by other people. [[User:Wongpak|Wongpak]] 09:38, 5 October 2006 (EDT)</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_For_Thursday,_September_2806-240/Classnotes For Thursday, September 282006-09-30T00:59:26Z<p>Wongpak: </p>
<hr />
<div>===Linear Combination===<br />
Definition: Let (''u''<sub>i</sub>) = (''u''<sub>1</sub>, ''u''<sub>2</sub>, ..., ''u''<sub>n</sub>) be a sequence of vectors in V. A sum of the form<br><br />
::''a''<sub>i</sub> <math> \in </math> F, <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub> = ''a''<sub>1</sub>''u''<sub>1</sub> + ''a''<sub>2</sub>''u''<sub>2</sub>+ ... +''a''<sub>n</sub>''u''<sub>n</sub><br />
<br />
is called a "Linear Combination" of the ''u''<sub>i</sub>.<br />
<br />
===Span===<br />
span(''u''<sub>i</sub>):= The set of all possible linear combinations of the ''u''<sub>i</sub>'s.<br />
<br />
<br />
If <math>\mathcal{S} \subseteq</math> V is any subset,<br />
: <br />
{| border="0" cellpadding="0" cellspacing="0"<br />
|-<br />
|span <math>\mathcal{S}</math><br />
|:= The set of all linear combination of vectors in <math>\mathcal{S}</math><br />
|-<br />
|<br />
|=<math>\left \{ \sum_{i=0}^n a_i u_i, a_i \in \mbox{F}, u_i \in \mathcal{S} \right \} \ni 0</math> <br />
|}<br />
<br />
even if <math>\mathcal{S}</math> is empty.<br />
<br />
'''Theorem''': For any <math>\mathcal{S} \subseteq</math> V, span <math>\mathcal{S}</math> is a subspace of V.<br />
<br />
Proof:<br><br />
1. 0 <math> \in </math> span <math>\mathcal{S}</math>.<br><br />
2. Let ''x'' <math> \in </math> span <math>\mathcal{S}</math> and let ''y'' <math> \in </math> span <math>\mathcal{S}</math>,<br />
<math>\Rightarrow</math> ''x'' = <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub>, ''u''<sub>i</sub> <math> \in \mathcal{S}</math>, ''y'' = <math>\sum_{i=1}^m</math> ''b''<sub>i</sub>''v''<sub>i</sub>, ''v''<sub>i</sub> <math> \in \mathcal{S}</math>.<br />
<math>\Rightarrow</math> ''x''+''y'' = <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub> + <math>\sum_{i=1}^m</math> ''b''<sub>i</sub>''v''<sub>i</sub> = <math>\sum_{i=1}^{m+n}</math> ''c''<sub>i</sub>''w''<sub>i</sub> where (''c''<sub>i</sub>)=(''a''<sub>1</sub>, ''a''<sub>2</sub>,...,''a''<sub>n</sub>, ''b''<sub>1</sub>, ''b''<sub>2</sub>,...,''b''<sub>m</sub>) and (''w''<sub>i</sub>)=(''u''<sub>1</sub>, ''u''<sub>2</sub>,...,''u''<sub>n</sub>, ''v''<sub>1</sub>, ''v''<sub>2</sub>,...,''v''<sub>m</sub>).<br><br />
3. ''cx''= c<math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub>=<math>\sum_{i=1}^n</math> (''ca''<sub>i</sub>)''u''<sub>i</sub><math>\in </math> span <math>\mathcal{S}</math>.<br />
<br />
<br />
''Example''<br />
1. Let P<sub>3</sub>(<math>\mathbb{R}</math>)={''ax''<sup>3</sup>+''bx''<sup>2</sup>+''cx''+''d''}<math>\subseteq</math>P(<math>\mathbb{R}</math>), ''a'', ''b'', ''c'', ''d'' <math>\in \mathbb{R}</math>.<BR><br />
''u''<sub>1</sub>=''x''<sup>3</sup>-2''x''<sup>2</sup>-5''x''-3<BR><br />
''u''<sub>2</sub>=3''x''<sup>3</sup>-5''x''<sup>2</sup>-4''x''-9<BR><br />
''v''=2''x''<sup>3</sup>-2''x''<sup>2</sup>+12''x''-6<BR><br />
Let W=span(''u''<sub>1</sub>, ''u''<sub>2</sub>).<BR><br />
Does ''v'' <math> \in </math> W?<BR><br />
''v'' is in W if ''v''=''a''<sub>1</sub>''u''<sub>1</sub>+''a''<sub>2</sub>''u''<sub>2</sub><br> for some ''a''<sub>1</sub>, ''a''<sub>2</sub> <math> \in \mathbb{R} </math>.<br />
<br />
If <math>\exists</math> ''a''<sub>1</sub>, ''a''<sub>2</sub> <math>\in \mathbb{R}</math>, <br><br />
{| border="0" cellpadding="0" cellspacing="0" align="center"<br />
|-<br />
|2''x''<sup>3</sup>-2''x''<sup>2</sup>+12''x''-6<br />
|= ''a''<sub>1</sub>(''x''<sup>3</sup>-2''x''<sup>2</sup>-5''x''-3) + ''a''<sub>2</sub>(3''x''<sup>3</sup>-5''x''<sup>2</sup>-4''x''-9)<br />
|<br />
|-<br />
|<br />
|=(''a''<sub>1</sub>+3''a''<sub>2</sub>)''x''<sup>3</sup> + (-2''a''<sub>1</sub> -5''a''<sub>2</sub>)''x''<sup>2</sup> + (-5''a''<sub>1</sub>-4''a''<sub>2</sub>)''x'' + (-3''a''<sub>1</sub>-9''a''<sub>2</sub>)<br />
|<br />
|-<br />
|&nbsp; <br />
|<br />
|<br />
|-<br />
|<div align="right"><math>\Leftrightarrow</math>2</div><br />
|=''a''<sub>1</sub>+3''a''<sub>2</sub><br />
|<br />
|-<br />
|<div align="right">-2</div><br />
|=-2''a''<sub>1</sub>-5''a''<sub>2</sub><br />
|<br />
|-<br />
|<div align="right">12</div><br />
|=-5''a''<sub>1</sub>-4''a''<sub>2</sub><br />
|<br />
|-<br />
|<div align="right">-6</div><br />
|=-3''a''<sub>1</sub>-9''a''<sub>2</sub><br />
|<br />
|}<br />
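As a cross-check, the overdetermined system above can be solved numerically (my sketch using numpy least squares; not part of the original notes):

```python
import numpy as np

# Columns are the coefficients of u1 = x^3 - 2x^2 - 5x - 3 and
# u2 = 3x^3 - 5x^2 - 4x - 9, listed from the x^3 term down; the
# right-hand side is v = 2x^3 - 2x^2 + 12x - 6.
A = np.array([[ 1.0,  3.0],
              [-2.0, -5.0],
              [-5.0, -4.0],
              [-3.0, -9.0]])
v = np.array([2.0, -2.0, 12.0, -6.0])

# Least squares handles the overdetermined system; v lies in span{u1, u2}
# exactly when the residual is (numerically) zero.
a, residual, rank, _ = np.linalg.lstsq(A, v, rcond=None)
print(a)                      # [-4.  2.]
print(np.allclose(A @ a, v))  # True, so v = -4*u1 + 2*u2 is in W
```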
Solving the four equations above gives ''a''<sub>1</sub>=-4 and ''a''<sub>2</sub>=2.<br><br />
Check that ''a''<sub>1</sub>=-4 and ''a''<sub>2</sub>=2 satisfy all 4 equations.<br><br />
Since they hold, <math>\Rightarrow</math> ''v'' <math>\in</math> W.</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_For_Thursday,_September_2806-240/Classnotes For Thursday, September 282006-09-29T18:07:46Z<p>Wongpak: /* Span */</p>
<hr />
<div>===Linear Combination===<br />
Definition: Let (''u''<sub>i</sub>) = (''u''<sub>1</sub>, ''u''<sub>2</sub>, ..., ''u''<sub>n</sub>) be a sequence of vectors in V. A sum of the form<br><br />
::''a''<sub>i</sub> <math> \in </math> F, <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub> = ''a''<sub>1</sub>''u''<sub>1</sub> + ''a''<sub>2</sub>''u''<sub>2</sub>+ ... +''a''<sub>n</sub>''u''<sub>n</sub><br />
<br />
is called a "Linear Combination" of the ''u''<sub>i</sub>.<br />
<br />
===Span===<br />
span(''u''<sub>i</sub>):= The set of all possible linear combinations of the ''u''<sub>i</sub>'s.<br />
<br />
<br />
If <math>\mathcal{S} \subseteq</math> V is any subset,<br />
: <br />
{| border="0" cellpadding="0" cellspacing="0"<br />
|-<br />
|span <math>\mathcal{S}</math><br />
|:= The set of all linear combination of vectors in <math>\mathcal{S}</math><br />
|-<br />
|<br />
|=<math>\left \{ \sum_{i=0}^n a_i u_i, a_i \in \mbox{F}, u_i \in \mathcal{S} \right \} \ni 0</math> <br />
|}<br />
<br />
even if <math>\mathcal{S}</math> is empty.<br />
<br />
'''Theorem''': For any <math>\mathcal{S} \subseteq</math> V, span <math>\mathcal{S}</math> is a subspace of V.<br />
<br />
Proof:<br><br />
1. 0 <math> \in </math> span <math>\mathcal{S}</math>.<br><br />
2. Let ''x'' <math> \in </math> span <math>\mathcal{S}</math> and let ''y'' <math> \in </math> span <math>\mathcal{S}</math>,<br />
<math>\Rightarrow</math> ''x'' = <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub>, ''u''<sub>i</sub> <math> \in \mathcal{S}</math>, ''y'' = <math>\sum_{i=1}^m</math> ''b''<sub>i</sub>''v''<sub>i</sub>, ''v''<sub>i</sub> <math> \in \mathcal{S}</math>.<br />
<math>\Rightarrow</math> ''x''+''y'' = <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub> + <math>\sum_{i=1}^m</math> ''b''<sub>i</sub>''v''<sub>i</sub> = <math>\sum_{i=1}^{m+n}</math> ''c''<sub>i</sub>''w''<sub>i</sub> where (''c''<sub>i</sub>)=(''a''<sub>1</sub>, ''a''<sub>2</sub>,...,''a''<sub>n</sub>, ''b''<sub>1</sub>, ''b''<sub>2</sub>,...,''b''<sub>m</sub>) and (''w''<sub>i</sub>)=(''u''<sub>1</sub>, ''u''<sub>2</sub>,...,''u''<sub>n</sub>, ''v''<sub>1</sub>, ''v''<sub>2</sub>,...,''v''<sub>m</sub>).<br><br />
3. ''cx''= c<math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub>=<math>\sum_{i=1}^n</math> (''ca''<sub>i</sub>)''u''<sub>i</sub><math>\in </math> span <math>\mathcal{S}</math>.<br />
<br />
<br />
To be continued ...</div>Wongpakhttp://drorbn.net/index.php?title=User:WongpakUser:Wongpak2006-09-29T18:06:49Z<p>Wongpak: </p>
<hr />
<div></div>Wongpakhttp://drorbn.net/index.php?title=User:WongpakUser:Wongpak2006-09-29T18:06:39Z<p>Wongpak: </p>
<hr />
<div><br />
<br />
===Span===<br />
span(''u''<sub>i</sub>):= The set of all possible linear combinations of the ''u''<sub>i</sub>'s.<br />
<br />
<br />
If <math>\mathcal{S} \subseteq</math> V is any subset,<br />
: <br />
{| border="0" cellpadding="0" cellspacing="0"<br />
|-<br />
|span <math>\mathcal{S}</math><br />
|:= The set of all linear combination of vectors in <math>\mathcal{S}</math><br />
|-<br />
|<br />
|=<math>\left \{ \sum_{i=0}^n a_i u_i, a_i \in \mbox{F}, u_i \in \mathcal{S} \right \} \ni 0</math> <br />
|}<br />
<br />
even if <math>\mathcal{S}</math> is empty.<br />
<br />
'''Theorem''': For any <math>\mathcal{S} \subseteq</math> V, span <math>\mathcal{S}</math> is a subspace of V.<br />
<br />
Proof:<br><br />
1. 0 <math> \in </math> span <math>\mathcal{S}</math>.<br><br />
2. Let ''x'' <math> \in </math> span <math>\mathcal{S}</math> and let ''y'' <math> \in </math> span <math>\mathcal{S}</math>,<br />
<math>\Rightarrow</math> ''x'' = <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub>, ''u''<sub>i</sub> <math> \in \mathcal{S}</math>, ''y'' = <math>\sum_{i=1}^m</math> ''b''<sub>i</sub>''v''<sub>i</sub>, ''v''<sub>i</sub> <math> \in \mathcal{S}</math>.<br />
<math>\Rightarrow</math> ''x''+''y'' = <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub> + <math>\sum_{i=1}^m</math> ''b''<sub>i</sub>''v''<sub>i</sub> = <math>\sum_{i=1}^{m+n}</math> ''c''<sub>i</sub>''w''<sub>i</sub> where (''c''<sub>i</sub>)=(''a''<sub>1</sub>, ''a''<sub>2</sub>,...,''a''<sub>n</sub>, ''b''<sub>1</sub>, ''b''<sub>2</sub>,...,''b''<sub>m</sub>) and (''w''<sub>i</sub>)=(''u''<sub>1</sub>, ''u''<sub>2</sub>,...,''u''<sub>n</sub>, ''v''<sub>1</sub>, ''v''<sub>2</sub>,...,''v''<sub>m</sub>).<br><br />
3. ''cx''= c<math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub>=<math>\sum_{i=1}^n</math> (''ca''<sub>i</sub>)''u''<sub>i</sub><math>\in </math> span <math>\mathcal{S}</math>.</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_For_Thursday,_September_2806-240/Classnotes For Thursday, September 282006-09-29T18:06:24Z<p>Wongpak: /* Linear Combination */</p>
<hr />
<div>===Linear Combination===<br />
Definition: Let (''u''<sub>i</sub>) = (''u''<sub>1</sub>, ''u''<sub>2</sub>, ..., ''u''<sub>n</sub>) be a sequence of vectors in V. A sum of the form<br><br />
::''a''<sub>i</sub> <math> \in </math> F, <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub> = ''a''<sub>1</sub>''u''<sub>1</sub> + ''a''<sub>2</sub>''u''<sub>2</sub>+ ... +''a''<sub>n</sub>''u''<sub>n</sub><br />
<br />
is called a "Linear Combination" of the ''u''<sub>i</sub>.<br />
<br />
===Span===<br />
span(''u''<sub>i</sub>):= The set of all possible linear combinations of the ''u''<sub>i</sub>'s.<br />
<br />
<br />
If <math>\mathcal{S} \subseteq</math> V is any subset,<br />
: <br />
{| border="0" cellpadding="0" cellspacing="0"<br />
|-<br />
|span <math>\mathcal{S}</math><br />
|:= The set of all linear combination of vectors in <math>\mathcal{S}</math><br />
|-<br />
|<br />
|=<math>\left \{ \sum_{i=0}^n a_i u_i, a_i \in \mbox{F}, u_i \in \mathcal{S} \right \} \ni 0</math> <br />
|}<br />
<br />
even if <math>\mathcal{S}</math> is empty.<br />
<br />
'''Theorem''': For any <math>\mathcal{S} \subseteq</math> V, span <math>\mathcal{S}</math> is a subspace of V.<br />
<br />
Proof:<br><br />
1. 0 <math> \in </math> span <math>\mathcal{S}</math>.<br><br />
2. Let ''x'' <math> \in </math> span <math>\mathcal{S}</math> and let ''y'' <math> \in </math> span <math>\mathcal{S}</math>,<br />
<math>\Rightarrow</math> ''x'' = <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub>, ''u''<sub>i</sub> <math> \in \mathcal{S}</math>, ''y'' = <math>\sum_{i=1}^m</math> ''b''<sub>i</sub>''v''<sub>i</sub>, ''v''<sub>i</sub> <math> \in \mathcal{S}</math>.<br />
<math>\Rightarrow</math> ''x''+''y'' = <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub> + <math>\sum_{i=1}^m</math> ''b''<sub>i</sub>''v''<sub>i</sub> = <math>\sum_{i=1}^{m+n}</math> ''c''<sub>i</sub>''w''<sub>i</sub> where (''c''<sub>i</sub>)=(''a''<sub>1</sub>, ''a''<sub>2</sub>,...,''a''<sub>n</sub>, ''b''<sub>1</sub>, ''b''<sub>2</sub>,...,''b''<sub>m</sub>) and (''w''<sub>i</sub>)=(''u''<sub>1</sub>, ''u''<sub>2</sub>,...,''u''<sub>n</sub>, ''v''<sub>1</sub>, ''v''<sub>2</sub>,...,''v''<sub>m</sub>).<br><br />
3. ''cx''= c<math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub>=<math>\sum_{i=1}^n</math> (''ca''<sub>i</sub>)''u''<sub>i</sub><math>\in </math> span <math>\mathcal{S}</math>.</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Classnotes_For_Thursday,_September_2806-240/Classnotes For Thursday, September 282006-09-29T18:05:51Z<p>Wongpak: </p>
<hr />
<div>===Linear Combination===<br />
Definition: Let (''u''<sub>i</sub>) = (''u''<sub>1</sub>, ''u''<sub>2</sub>, ..., ''u''<sub>n</sub>) be a sequence of vectors in V. A sum of the form<br><br />
::''a''<sub>i</sub> <math> \in </math> F, <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub> = ''a''<sub>1</sub>''u''<sub>1</sub> + ''a''<sub>2</sub>''u''<sub>2</sub>+ ... +''a''<sub>n</sub>''u''<sub>n</sub><br />
<br />
is called a "Linear Combination" of the ''u''<sub>i</sub>.</div>Wongpakhttp://drorbn.net/index.php?title=Template:06-240/NavigationTemplate:06-240/Navigation2006-09-29T18:05:34Z<p>Wongpak: </p>
<hr />
<div>{| cellpadding="0" cellspacing="0" style="clear: right; float: right"<br />
|- align=right<br />
|<div class="NavFrame"><div class="NavHead">[[06-240]]/[[Template:06-240/Navigation|Navigation Panel]]&nbsp;&nbsp;</div><br />
<div class="NavContent"><br />
{| border="1px" cellpadding="1" cellspacing="0" width="220" style="margin: 0 0 1em 0.5em; font-size: small"<br />
|-<br />
!#<br />
!Week of...<br />
!Notes and Links<br />
|-<br />
|align=center|1<br />
|Sep 11<br />
|[[06-240/About This Class|About]], [[06-240/Classnotes For Tuesday, September 12|Tue]], [[06-240/Homework Assignment 1|HW1]], [[06-240/Putnam Competition|Putnam]], [[06-240/Classnotes for Thursday, September 14|Thu]]<br />
|-<br />
|align=center|2<br />
|Sep 18<br />
|[[06-240/Classnotes For Tuesday, September 19|Tue]], [[06-240/Homework Assignment 2|HW2]], [[06-240/Classnotes For Thursday, September 21|Thu]]<br />
|-<br />
|align=center|3<br />
|Sep 25<br />
|[[06-240/Classnotes For Tuesday September 26|Tue]], [[06-240/Homework Assignment 3|HW3]], [[06-240/Class Photo|Photo]], [[06-240/Classnotes For Thursday, September 28|Thu]]<br />
|-<br />
|align=center|4<br />
|Oct 2<br />
|HW4<br />
|-<br />
|align=center|5<br />
|Oct 9<br />
|HW5<br />
|-<br />
|align=center|6<br />
|Oct 16<br />
|<br />
|-<br />
|align=center|7<br />
|Oct 23<br />
|Term Test<br />
|-<br />
|align=center|8<br />
|Oct 30<br />
|HW6<br />
|-<br />
|align=center|9<br />
|Nov 6<br />
|HW7<br />
|-<br />
|align=center|10<br />
|Nov 13<br />
|HW8<br />
|-<br />
|align=center|11<br />
|Nov 20<br />
|HW9<br />
|-<br />
|align=center|12<br />
|Nov 27<br />
|HW10<br />
|-<br />
|align=center|13<br />
|Dec 4<br />
|<br />
|-<br />
|colspan=3 align=center|[[06-240/Register of Good Deeds|Register of Good Deeds]]<br />
|-<br />
|colspan=3 align=center|[[Image:06-240-ClassPhoto.jpg|180px]]<br>[[06-240/Class Photo|Add your name / see who's in!]]<br />
|}<br />
</div></div><br />
|}</div>Wongpakhttp://drorbn.net/index.php?title=User:WongpakUser:Wongpak2006-09-29T15:56:50Z<p>Wongpak: /* Span */</p>
<hr />
<div>===Linear Combination===<br />
Definition: Let (''u''<sub>i</sub>) = (''u''<sub>1</sub>, ''u''<sub>2</sub>, ..., ''u''<sub>n</sub>) be a sequence of vectors in V. A sum of the form<br><br />
::<math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub> = ''a''<sub>1</sub>''u''<sub>1</sub> + ''a''<sub>2</sub>''u''<sub>2</sub> + ... + ''a''<sub>n</sub>''u''<sub>n</sub>, with ''a''<sub>i</sub> <math> \in </math> F,<br />
<br />
is called a "Linear Combination" of the ''u''<sub>i</sub>.<br />
<br />
===Span===<br />
span(''u''<sub>i</sub>):= The set of all possible linear combinations of the ''u''<sub>i</sub>'s.<br />
<br />
<br />
If <math>\mathcal{S} \subseteq</math> V is any subset,<br />
: <br />
{| border="0" cellpadding="0" cellspacing="0"<br />
|-<br />
|span <math>\mathcal{S}</math><br />
|:= The set of all linear combinations of vectors in <math>\mathcal{S}</math><br />
|-<br />
|<br />
|=<math>\left \{ \sum_{i=1}^n a_i u_i, a_i \in \mbox{F}, u_i \in \mathcal{S} \right \} \ni 0</math> <br />
|}<br />
<br />
even if <math>\mathcal{S}</math> is empty.<br />
<br />
'''Theorem''': For any <math>\mathcal{S} \subseteq</math> V, span <math>\mathcal{S}</math> is a subspace of V.<br />
<br />
Proof:<br><br />
1. 0 <math> \in </math> span <math>\mathcal{S}</math>.<br><br />
2. Let ''x'' <math> \in </math> span <math>\mathcal{S}</math>, let ''y'' <math> \in </math> span <math>\mathcal{S}</math>,<br />
<math>\Rightarrow</math> ''x'' = <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub>, ''u''<sub>i</sub> <math> \in \mathcal{S}</math>, ''y'' = <math>\sum_{i=1}^m</math> ''b''<sub>i</sub>''v''<sub>i</sub>, ''v''<sub>i</sub> <math> \in \mathcal{S}</math>.<br />
<math>\Rightarrow</math> ''x''+''y'' = <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub> + <math>\sum_{i=1}^m</math> ''b''<sub>i</sub>''v''<sub>i</sub> = <math>\sum_{i=1}^{m+n}</math> ''c''<sub>i</sub>''w''<sub>i</sub> where (''c''<sub>i</sub>)=(''a''<sub>1</sub>, ''a''<sub>2</sub>,...,''a''<sub>n</sub>, ''b''<sub>1</sub>, ''b''<sub>2</sub>,...,''b''<sub>m</sub>) and (''w''<sub>i</sub>)=(''u''<sub>1</sub>, ''u''<sub>2</sub>,...,''u''<sub>n</sub>, ''v''<sub>1</sub>, ''v''<sub>2</sub>,...,''v''<sub>m</sub>), so ''x''+''y'' <math> \in </math> span <math>\mathcal{S}</math>.<br><br />
3. ''cx''= c<math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub>=<math>\sum_{i=1}^n</math> (''ca''<sub>i</sub>)''u''<sub>i</sub><math>\in </math> span <math>\mathcal{S}</math>.</div>Wongpakhttp://drorbn.net/index.php?title=User:WongpakUser:Wongpak2006-09-29T15:53:36Z<p>Wongpak: </p>
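The two closure steps of the proof above can be checked numerically (an illustrative sketch only, over F = R; the helper name `combo` is mine, not from the notes): a sum of two linear combinations of vectors in S is the combination whose coefficients are the two coefficient lists concatenated, and a scalar multiple rescales the coefficients.

```python
# Sketch of proof steps 2 and 3 over F = R: span(S) is closed under
# addition (concatenate the combinations) and scalar multiplication.

def combo(coeffs, vectors):
    """Evaluate sum(a_i * u_i) with vectors as equal-length tuples."""
    dim = len(vectors[0])
    return tuple(sum(a * u[j] for a, u in zip(coeffs, vectors))
                 for j in range(dim))

S = [(1.0, 0.0, 1.0), (0.0, 2.0, 0.0)]   # some subset of V = R^3
x = combo([3.0, 1.0], S)                  # x in span(S)
y = combo([-1.0, 4.0], S)                 # y in span(S)

# Step 2: x + y is the combination with coefficients (a_i) followed by (b_i).
xy = combo([3.0, 1.0, -1.0, 4.0], S + S)
assert xy == tuple(xi + yi for xi, yi in zip(x, y))

# Step 3: c*x is the combination with coefficients (c*a_i).
c = 5.0
cx = combo([c * 3.0, c * 1.0], S)
assert cx == tuple(c * xi for xi in x)
print("closure checks pass")
```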
<hr />
<div>===Linear Combination===<br />
Definition: Let (''u''<sub>i</sub>) = (''u''<sub>1</sub>, ''u''<sub>2</sub>, ..., ''u''<sub>n</sub>) be a sequence of vectors in V. A sum of the form<br><br />
::<math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub> = ''a''<sub>1</sub>''u''<sub>1</sub> + ''a''<sub>2</sub>''u''<sub>2</sub> + ... + ''a''<sub>n</sub>''u''<sub>n</sub>, with ''a''<sub>i</sub> <math> \in </math> F,<br />
<br />
is called a "Linear Combination" of the ''u''<sub>i</sub>.<br />
<br />
===Span===<br />
span(''u''<sub>i</sub>):= The set of all possible linear combinations of the ''u''<sub>i</sub>'s.<br />
<br />
<br />
If <math>\mathcal{S} \subseteq</math> V is any subset,<br />
: <br />
{| border="0" cellpadding="0" cellspacing="0"<br />
|-<br />
|span <math>\mathcal{S}</math><br />
|:= The set of all linear combinations of vectors in <math>\mathcal{S}</math><br />
|-<br />
|<br />
|=<math>\left \{ \sum_{i=1}^n a_i u_i, a_i \in \mbox{F}, u_i \in \mathcal{S} \right \} \ni 0</math> <br />
|}<br />
<br />
even if <math>\mathcal{S}</math> is empty.<br />
<br />
'''Theorem''': For any <math>\mathcal{S} \subseteq</math> V, span <math>\mathcal{S}</math> is a subspace of V.<br />
Proof:<br />
# 0 <math> \in </math> span <math>\mathcal{S}</math>,<br />
# Let ''x'' <math> \in </math> span <math>\mathcal{S}</math>, let ''y'' <math> \in </math> span <math>\mathcal{S}</math>,<br />
<math>\Rightarrow</math> ''x'' = <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub>, ''u''<sub>i</sub> <math> \in \mathcal{S}</math>, ''y'' = <math>\sum_{i=1}^m</math> ''b''<sub>i</sub>''v''<sub>i</sub>, ''v''<sub>i</sub> <math> \in \mathcal{S}</math>.<br />
<math>\Rightarrow</math> ''x''+''y'' = <math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub> + <math>\sum_{i=1}^m</math> ''b''<sub>i</sub>''v''<sub>i</sub> = <math>\sum_{i=1}^{m+n}</math> ''c''<sub>i</sub>''w''<sub>i</sub> where (''c''<sub>i</sub>)=(''a''<sub>1</sub>, ''a''<sub>2</sub>,...,''a''<sub>n</sub>, ''b''<sub>1</sub>, ''b''<sub>2</sub>,...,''b''<sub>m</sub>) and (''w''<sub>i</sub>)=(''u''<sub>1</sub>, ''u''<sub>2</sub>,...,''u''<sub>n</sub>, ''v''<sub>1</sub>, ''v''<sub>2</sub>,...,''v''<sub>m</sub>).<br />
# ''cx''= c<math>\sum_{i=1}^n</math> ''a''<sub>i</sub>''u''<sub>i</sub>=<math>\sum_{i=1}^n</math> (''ca''<sub>i</sub>)''u''<sub>i</sub><math>\in </math> span <math>\mathcal{S}</math>.</div>Wongpakhttp://drorbn.net/index.php?title=06-240/Class_Photo06-240/Class Photo2006-09-29T14:20:13Z<p>Wongpak: /* Who We Are */</p>
<hr />
<div>{{06-240/Navigation}}<br />
<br />
Our class on September 28, 2006:<br />
<br />
[[Image:06-240-ClassPhoto.jpg|thumb|center|500px|Class Photo: click to enlarge]]<br />
<br />
Please identify yourself in this photo! There are two ways to do that:<br />
<br />
* [[Special:Userlogin|Log in]] to this Wiki and edit this page. Put your name, userid, email address and location in the picture in the alphabetical list below.<br />
* Send [[User:Drorbn|Dror]] an email message with this information.<br />
<br />
The first option is more fun but less private.<br />
<br />
===Who We Are===<br />
<br />
{| align=center border=1<br />
|-<br />
!First Name<br />
!Last Name<br />
!UserID<br />
!Email<br />
!In Photo<br />
!Comments<br />
{{Photo Entry|last=Bar-Natan|first=Dror|userid=Drorbn|email=drorbn@ math.toronto.edu|location=facing everybody, as the photographer|comments=Take this entry as a model and leave it first. Otherwise alphabetize by last name. Feel free to leave some fields blank}}<br />
{{Photo Entry|last=Carberry|first=Mick|userid=MC|email=Mick.Carberry@utoronto.ca|location=long haired, bearded old guy in back|comments= }}<br />
{{Photo Entry|last=Wong|first=Pak|userid=wongpak|email=plwong@utoronto.ca|location=Third row from the back, left most, black shirt|comments= }}<br />
|}</div>Wongpakhttp://drorbn.net/index.php?title=Talk:06-240/Homework_Assignment_1Talk:06-240/Homework Assignment 12006-09-26T12:28:25Z<p>Wongpak: </p>
<hr />
<div>What information should be included on the homework assignments besides the answers to the assignment? <br />
Is student name, Math 240, Homework Assignment 1 and date sufficient?<br />
MC<br />
<br />
Yes.<br />
--[[User:Drorbn|Drorbn]] 14:50, 15 September 2006 (EDT)<br />
<br />
== Q4 ==<br />
<br />
I have a question on Q4. For the part a^-1=a^2, if it's true, then a*a^2=1, which makes a=1....but a can't be 1, right?<br />
<br />
I don't see why <math>a*a^2=1</math> implies <math>a=1</math>. --[[User:Drorbn|Drorbn]] 06:16, 22 September 2006 (EDT)<br />
<br />
because <math>b=a^{-1}=a^2</math>, if ab=1, why shouldn't <math>a*a^2=1</math>?<br />
<br />
But what's wrong with that? --[[User:Drorbn|Drorbn]] 17:16, 22 September 2006 (EDT)<br />
<br />
Finally I'm registered.....ok, if <math>a*a^2=1</math>, then a=1,but a field cannot have identical elements.....or can it?.........btw why is your name shown here but mine not?...never used a wiki based site....<br />
<br />
Repeat: I don't see why <math>a*a^2=1</math> implies <math>a=1</math>. --[[User:Drorbn|Drorbn]] 03:24, 23 September 2006 (EDT)<br />
<br />
er....since <math>a*a^2=a^3=1</math>, or am I right about <math>a*a^2=a^3</math>?....and what makes <math>a^3=1</math> except a=1?...sorry but please tell me where I got wrong.........<br />
<br />
Well, OUR very own field has an element <math>a</math> for which <math>a^3=1</math> yet <math>a\neq 1</math>... --[[User:Drorbn|Drorbn]] 17:08, 23 September 2006 (EDT)<br />
<br />
ok.....that's....very...convincing......I'll shut up...<br />
<br />
You seem unhappy, but I actually meant what I said. The equality <math>a^3=1</math> in a general field does not imply the equality <math>a=1</math> --- why would it? After all, <math>a^2=1</math> does not imply <math>a=1</math> either. Here are two examples for fields in which there is an <math>a\neq 1</math> for which <math>a^3=1</math>:<br />
# Our field and our <math>a</math>.<br />
# The complex numbers <math>{\mathbb C}</math> and <math>a=-\frac12+\frac{\sqrt{3}}{2}i</math>.<br />
--[[User:Drorbn|Drorbn]] 17:38, 24 September 2006 (EDT)<br />
<br />
Actually I guessed it had something to do with the field. But this concept is still new to me, I just can't convince myself a is not 1 when a*a*a=1...But that example of complex numbers is indeed very convincing....thank you for your patience :)<br />
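Both of Dror's examples above can be checked numerically (a sketch only; mod 7 is my choice of field here, not necessarily the field used in class):

```python
# Illustration of the thread above: a^3 = 1 does not force a = 1.
# Mod 7 is used only as an example field; the class's field may differ.
p = 7
cube_roots = [a for a in range(1, p) if pow(a, 3, p) == 1]
print(cube_roots)   # 2 and 4 also cube to 1 mod 7, not just 1

# Dror's complex example: a = -1/2 + (sqrt(3)/2)i satisfies a^3 = 1, a != 1.
a = complex(-0.5, 3 ** 0.5 / 2)
assert abs(a ** 3 - 1) < 1e-9 and abs(a - 1) > 1e-9
```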
<br />
== Assignment 1 Solution ==<br />
<br />
I would appreciate it if you would notify me of any errors. [[Media:Assignment 1 Ans.pdf|Assignment 1 Solution]]--[[User:Wongpak|Wongpak]] 08:28, 26 September 2006 (EDT)</div>Wongpakhttp://drorbn.net/index.php?title=Talk:06-240/Homework_Assignment_1Talk:06-240/Homework Assignment 12006-09-26T12:24:48Z<p>Wongpak: Assigment 1 Solution</p>
<hr />
<div>What information should be included on the homework assignments besides the answers to the assignment? <br />
Is student name, Math 240, Homework Assignment 1 and date sufficient?<br />
MC<br />
<br />
Yes.<br />
--[[User:Drorbn|Drorbn]] 14:50, 15 September 2006 (EDT)<br />
<br />
== Q4 ==<br />
<br />
I have a question on Q4. For the part a^-1=a^2, if it's true, then a*a^2=1, which makes a=1....but a can't be 1, right?<br />
<br />
I don't see why <math>a*a^2=1</math> implies <math>a=1</math>. --[[User:Drorbn|Drorbn]] 06:16, 22 September 2006 (EDT)<br />
<br />
because <math>b=a^{-1}=a^2</math>, if ab=1, why shouldn't <math>a*a^2=1</math>?<br />
<br />
But what's wrong with that? --[[User:Drorbn|Drorbn]] 17:16, 22 September 2006 (EDT)<br />
<br />
Finally I'm registered.....ok, if <math>a*a^2=1</math>, then a=1,but a field cannot have identical elements.....or can it?.........btw why is your name shown here but mine not?...never used a wiki based site....<br />
<br />
Repeat: I don't see why <math>a*a^2=1</math> implies <math>a=1</math>. --[[User:Drorbn|Drorbn]] 03:24, 23 September 2006 (EDT)<br />
<br />
er....since <math>a*a^2=a^3=1</math>, or am I right about <math>a*a^2=a^3</math>?....and what makes <math>a^3=1</math> except a=1?...sorry but please tell me where I got wrong.........<br />
<br />
Well, OUR very own field has an element <math>a</math> for which <math>a^3=1</math> yet <math>a\neq 1</math>... --[[User:Drorbn|Drorbn]] 17:08, 23 September 2006 (EDT)<br />
<br />
ok.....that's....very...convincing......I'll shut up...<br />
<br />
You seem unhappy, but I actually meant what I said. The equality <math>a^3=1</math> in a general field does not imply the equality <math>a=1</math> --- why would it? After all, <math>a^2=1</math> does not imply <math>a=1</math> either. Here are two examples for fields in which there is an <math>a\neq 1</math> for which <math>a^3=1</math>:<br />
# Our field and our <math>a</math>.<br />
# The complex numbers <math>{\mathbb C}</math> and <math>a=-\frac12+\frac{\sqrt{3}}{2}i</math>.<br />
--[[User:Drorbn|Drorbn]] 17:38, 24 September 2006 (EDT)<br />
<br />
Actually I guessed it had something to do with the field. But this concept is still new to me, I just can't convince myself a is not 1 when a*a*a=1...But that example of complex numbers is indeed very convincing....thank you for your patience :)<br />
<br />
== Assignment 1 Solution ==<br />
<br />
[[Media:Assignment 1 Ans.pdf]]</div>Wongpakhttp://drorbn.net/index.php?title=File:Assignment_1_Ans.pdfFile:Assignment 1 Ans.pdf2006-09-26T12:11:55Z<p>Wongpak: I would appreciate if you may notify me for any correction.</p>
<hr />
<div>I would appreciate it if you would notify me of any corrections.</div>Wongpakhttp://drorbn.net/index.php?title=Talk:06-240/Homework_Assignment_2Talk:06-240/Homework Assignment 22006-09-24T15:29:47Z<p>Wongpak: /* p12: Q1 */</p>
<hr />
<div>===p12: Q1===<br />
(f) An <math>m \times n</math> matrix has ''m'' columns and ''n'' rows. (True or False)<br />
According to the answer at the back, it's FALSE. Can anyone please explain why? Thank you. [[User:Wongpak|Wongpak]] 22:01, 23 September 2006 (EDT)<br />
<br />
By convention it is ''m'' rows and ''n'' columns. MC<br />
<br />
Oh yeah... Thank you so much, MC. [[User:Wongpak|Wongpak]] 11:29, 24 September 2006 (EDT)</div>Wongpakhttp://drorbn.net/index.php?title=Talk:06-240/Homework_Assignment_2Talk:06-240/Homework Assignment 22006-09-24T02:01:05Z<p>Wongpak: /* p12: Q1 */</p>
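The rows-before-columns convention MC describes can also be seen in code (a sketch using plain nested lists, not any particular library): an m × n matrix is m rows, each of length n.

```python
# An m x n matrix has m rows and n columns -- so the book's statement
# ("m columns and n rows") is indeed False.
m, n = 2, 3
A = [[0] * n for _ in range(m)]   # 2 rows, each with 3 entries
assert len(A) == m                # number of rows is m
assert len(A[0]) == n             # number of columns is n
print(A)
```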
<hr />
<div>===p12: Q1===<br />
(f) An <math>m \times n</math> matrix has ''m'' columns and ''n'' rows. (True or False)<br />
According to the answer at the back, it's FALSE. Can anyone please explain why? Thank you. [[User:Wongpak|Wongpak]] 22:01, 23 September 2006 (EDT)</div>Wongpakhttp://drorbn.net/index.php?title=Talk:06-240/Homework_Assignment_2Talk:06-240/Homework Assignment 22006-09-24T01:49:25Z<p>Wongpak: /* p12: Q1 */</p>
<hr />
<div>===p12: Q1===<br />
(f) An <math>m \times n</math> matrix has ''m'' columns and ''n'' rows. (True or False)<br />
According to the answer at the back, it's FALSE. Can anyone please explain why? Thank you.</div>Wongpakhttp://drorbn.net/index.php?title=Talk:06-240/Homework_Assignment_2Talk:06-240/Homework Assignment 22006-09-24T01:47:35Z<p>Wongpak: </p>
<hr />
<div>===p12: Q1===<br />
(f) An <math>m \times n</math> matrix has ''m'' columns and ''n'' rows.<br />
According to the answer at the back, it's FALSE. Can anyone please explain why? Thank you.</div>Wongpak