Linear Algebra

...stating simple things in a convoluted way gives mathematical rigor...

matrix: M rows x N columns = MxN matrix
  a 1 x N matrix is a ROW vector and an N x 1 matrix is a COLUMN vector

addition and subtraction: add corresponding elements
  both have to have the same MxN dimensions
  operation is commutative: A+B = B+A

    A = a b    B = e f
        c d        g h

    A+B = a+e b+f
          c+g d+h

multiply by scalar: multiply every element by the scalar value

    -1 * A = -a -b
             -c -d

multiply by matrix: multiply a row from the first by a column from the second
and add the results
  A needs to have the same number of columns as B has rows
  operation is NOT commutative: A*B != B*A
  multiplying row * column this way is the vector DOT PRODUCT
  (these rules are sanity-checked in the code sketches after the
  Gauss Jordan section below)

    A = a b    B = e f
        c d        g h

    A * B = Ar1.Bc1  Ar1.Bc2
            Ar2.Bc1  Ar2.Bc2

    A * B = a*e+b*g  a*f+b*h
            c*e+d*g  c*f+d*h

  the multiply gives a matrix of size Arows x Bcolumns:
    A(axb) * B(bxc) = C(axc)
  multiplying in the other order, B*A, only works if a=c,
  i.e., A has the same number of rows as B has columns

identity matrix: multiplying a matrix by the Identity gives the same matrix
  A * IdentityMatrix = A
  if both are square (have the same RxC dimensions) then it is
  commutative: A*I = I*A
  the Identity Matrix has 1's on the diagonal and 0's elsewhere

    A = a b    I = 1 0
        c d        0 1

matrix inversion: A^-1 [the inverse of A] times A = the Identity matrix
  A*A^-1 = I and A^-1*A = I

  (only) for a 2x2 matrix...

    A = a b
        c d

  use the DETERMINANT: notation |A|, like "absolute value"
  to get the determinant of A, multiply the diagonals and subtract:

    det = |A| = a*d - b*c

  Note: if the determinant is 0 then the matrix cannot be inverted
  and is called "singular"

  scalar multiply 1/determinant by the ADJUGATE (classical adjoint),
  the "diagonally flipped" matrix:

    A^-1 = 1/|A| *  d -b
                   -c  a

  example:

    B = 3 -4
        2 -5

    |B| = (3*-5) - (-4*2) = -7

    B^-1 = -1/7 * -5  4  =  5/7 -4/7
                  -2  3     2/7 -3/7

  for a larger matrix use MINORs (sub-matrices) to get the determinants in a
  "matrix of minors" -- for a 3x3, cross off the row & column being calculated
  for and use the remaining values to calc a determinant, then put that det in
  the matrix at the calculation position. THEN change the sign of every other
  element, row by row, like this:

    + - +
    - + -
    + - +

  this makes the "matrix of co-factors"
  then get the Adjugate of A, "Adj(A)", with the "transpose" of the co-factors:
  switch row num and column num (diagonals stay the same):

    cf = a b c    tr = a d g
         d e f         b e h
         g h i         c f i

  so... A^-1 = 1/|A| * adj(A)

  to get |A|, multiply a row of A element-by-element with the matching
  co-factor row and add -- which is the same as multiplying each element
  by the determinant of its minor matrix (with the sign pattern applied)

  example:

    A = 1 0 1    cf =  1  1 -2    det = -1
        0 2 1          1  0 -1
        1 1 1         -2 -1  2

Gauss Jordan Elimination -- augment the matrix with the Identity matrix

    1 0 1 | 1 0 0
    0 2 1 | 0 1 0
    1 1 1 | 0 0 1

  like solving systems of linear equations: try to make the left side look
  like the identity matrix, but do the same thing to each sub-matrix, using
  elementary row operations:
    * replace a row with its values * a constant
    * swap any two rows
    * add or subtract a row from another, with an optional multiplier
  essentially each step is a matrix multiply: each step is an "elimination
  matrix" and the product of the elimination matrices is the Inverse
  see image: gaussJordan.jpg
    replace row3 with row3-row1 to get a 0 in the 3,1 position
    swap row2 and row3
    subtract 2*row2 from row3
    replace row1 with row1-row3
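A quick sanity check of the basic rules above, as a minimal Python sketch
(assuming numpy is available; the matrices are arbitrary examples):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[5, 6], [7, 8]])

    print(np.array_equal(A + B, B + A))   # True: addition is commutative
    print(-1 * A)                         # scalar multiply hits every element
    print(np.array_equal(A @ B, B @ A))   # False: multiply is NOT commutative
    I = np.eye(2, dtype=int)
    print(np.array_equal(A @ I, A))       # True: A * Identity = A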
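The 2x2 inverse formula, sketched as a function (inverse_2x2 is my own
illustrative name, not a library call) and checked against numpy on the
B example from above:

    import numpy as np

    def inverse_2x2(m):
        # 1/|A| times the "diagonally flipped" adjugate, per the notes
        (a, b), (c, d) = m
        det = a * d - b * c
        if det == 0:
            raise ValueError("singular: determinant is 0, no inverse")
        return [[ d / det, -b / det],
                [-c / det,  a / det]]

    B = [[3, -4], [2, -5]]
    print(inverse_2x2(B))       # [[5/7, -4/7], [2/7, -3/7]] as decimals
    print(np.linalg.inv(B))     # numpy agrees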
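And the Gauss Jordan procedure itself as a sketch (gauss_jordan_inverse is an
illustrative name; the pivot-selection row swap is an addition for numerical
safety, not part of the worked example above). The comments mark the same
elementary row operations listed in the notes:

    import numpy as np

    def gauss_jordan_inverse(A):
        # augment [A | I], row-reduce the left side to the identity;
        # the right side is then A^-1
        A = np.array(A, dtype=float)
        n = A.shape[0]
        aug = np.hstack([A, np.eye(n)])
        for col in range(n):
            pivot = col + np.argmax(np.abs(aug[col:, col]))
            if np.isclose(aug[pivot, col], 0.0):
                raise ValueError("matrix is singular (determinant 0)")
            aug[[col, pivot]] = aug[[pivot, col]]        # swap any two rows
            aug[col] /= aug[col, col]                    # row * constant
            for row in range(n):
                if row != col:
                    aug[row] -= aug[row, col] * aug[col] # subtract a multiple
        return aug[:, n:]

    A = [[1, 0, 1],
         [0, 2, 1],
         [1, 1, 1]]
    print(gauss_jordan_inverse(A))   # equals 1/|A| * adj(A) with det = -1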
Matrices to solve a system of linear equations

  system of linear equations: finding where lines intersect,
  i.e., looking for the x,y that satisfy both equations:

    3x + 2y = 7   ->  y = -3/2x + 7/2   slope = -3/2   intercept = 7/2
   -6x + 6y = 6   ->  y = 1x + 1        slope = 1      intercept = 1

  make a matrix A with the parameters; multiplied by x,y it gives the results:

     3 2  *  x  =  7
    -6 6     y     6

    A X = b

  so the inverse of the matrix A times the results = x,y:

    X = A^-1 * b

  Remember that matrix multiply ORDER MATTERS...
  If the determinant is 0, can't invert the matrix... the line slopes are
  parallel; with different intercepts there is no intersection
  (with the same intercept the lines coincide: infinitely many solutions)

vector space spanned - all the vectors we can get by adding scaled copies
of two vectors

Linear Combination of Vectors :: SPAN
  addition of vectors, each multiplied by an optional scalar that "scales" it

Dot Product of Vectors -- A . B (not A x B -- that's the cross product)

    a1   b1
    a2 . b2  =  a1*b1 + a2*b2 + a3*b3  ==  a scalar value
    a3   b3

  if equal to 0 then the vectors are orthogonal (90deg to each other)

Length of Vector

    ||A|| = sqrt( a1^2 + a2^2 + ... + an^2 )

  therefore ||A||^2 = (A dot A)

Angle between Vectors
  by the law of cosines: c^2 = a^2 + b^2 - 2*a*b*cos(t)

    so ||A-B||^2 = ||B||^2 + ||A||^2 - 2 * ||A|| * ||B|| * cos(t)

  expanding the left side gives ||A-B||^2 = ||B||^2 + ||A||^2 - 2*(A dot B),

    so cos(t) = (A dot B) / ( ||A|| * ||B|| )

Cross Product of Vectors
  only defined in R^3
  1. ignore row1 and cross multiply row2 x row3 (det of the lower set)
  2. ignore row2 and cross multiply row3 x row1 (det of the outside set)
  3. ignore row3 and cross multiply row1 x row2 (det of the upper set)

    a1   b1    a2*b3 - a3*b2
    a2 x b2  = a3*b1 - a1*b3
    a3   b3    a1*b2 - a2*b1

Matrix times a Vector (a matrix multiply -- not actually a cross product)
  the Vector must have as many ROWs as the Matrix has COLUMNs
  gives another Vector with the Matrix's number of ROWs
  sum of Matrix A row entries A1-n times Vector X row values X1-n:

    A11*X1 + A12*X2 ...
    A21*X1 + A22*X2 ...

  each entry is the dot product of a matrix row with the vector
  alternate view: use the matrix columns as vectors, multiply each by the
  "scalar" from the V row matching the M column, and sum

Eigenvalue and Eigenvector
  an Eigenvector is a vector that is only scaled by a transformation, rather
  than changing direction (other than possibly being reversed)
  the Eigenvalue is the multiplier applied during the transformation
  (for a symmetric matrix the eigenvectors form an orthogonal set)

  if A*v = lambda*v (a matrix times a vector equals a scalar times the same
  vector) then v is an eigenvector of A and lambda is an eigenvalue (the
  scaling factor)
  looking for non-zero vectors, because 0 is a trivial member of the null space

  Transformation Matrix: T(v) = A*v = lambda*v
    so Zero-vector = lambda*v - A*v
    and since v = IdentityMatrix * v:
    (lambda*I - A) * v = Z
  A*v = lambda*v for non-zero vectors iff det( lambda*I - A ) = 0

  Example for a 2x2 Matrix:

    A = 1 2
        4 3

    det( lambda*I - A ) = 0

    det( L 0  -  1 2 ) = 0
         0 L     4 3

  doing the subtraction:

    L-1  0-2
    0-4  L-3

    det = (L-1)*(L-3) - (-2)*(-4) = 0
    equals: L^2 - 4L - 5 = 0   -- the characteristic polynomial
    solving: lambda = 5 or -1  -- assuming a non-zero eigenvector

  given Av = Lv:
    Zero-vector = LIv - Av
    Z = (LI - A) * v  -- the Eigenspace (all E-vectors) is the null-space of LI-A

  for L = 5, the null-space is that of

    5 0  -  1 2  =   4 -2
    0 5     4 3     -4  2

  to get the null space, need the "reduced row echelon" form of the above...
  magic of some kind: keep the top row and add it to the bottom row:

    4 -2
    0  0

  then divide the top row by 4:

    1 -1/2  *  v1  =  0
    0  0       v2     0

  therefore v1 - 1/2*v2 = 0, so the eigenspace is the span of the vector

    1/2
     1
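A sketch of solving the example system above in numpy; np.linalg.solve is the
usual practice, shown alongside the explicit X = A^-1 * b:

    import numpy as np

    A = np.array([[ 3, 2],
                  [-6, 6]])
    b = np.array([7, 6])

    print(np.linalg.inv(A) @ b)   # X = A^-1 * b -> [1. 2.], lines cross at (1,2)
    print(np.linalg.solve(A, b))  # same answer without forming the inverse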
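Dot product, length, and angle sketched with a pair of arbitrary example
vectors (the names a and b are just illustrations):

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, -5.0, 6.0])

    dot = a @ b                   # a1*b1 + a2*b2 + a3*b3 -> a scalar
    length_a = np.sqrt(a @ a)     # ||a|| = sqrt(a dot a)
    cos_t = dot / (np.linalg.norm(a) * np.linalg.norm(b))
    print(dot, length_a, np.degrees(np.arccos(cos_t)))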
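The three ignore-a-row determinants of the cross product as a sketch (cross is
my own name here), checked against numpy and against the orthogonality
property:

    import numpy as np

    def cross(a, b):
        # cross product in R^3 via the three 2x2 determinants above
        return [a[1]*b[2] - a[2]*b[1],   # ignore row 1
                a[2]*b[0] - a[0]*b[2],   # ignore row 2
                a[0]*b[1] - a[1]*b[0]]   # ignore row 3

    a, b = [1, 2, 3], [4, 5, 6]
    print(cross(a, b))                   # [-3, 6, -3]
    print(np.cross(a, b))                # numpy agrees
    print(np.dot(cross(a, b), a),
          np.dot(cross(a, b), b))        # 0 0 -> orthogonal to both inputs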
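And the eigen example verified in numpy; note that np.linalg.eig does not
guarantee the order of the eigenvalues, hence the argmax to pick lambda = 5:

    import numpy as np

    A = np.array([[1, 2],
                  [4, 3]])
    vals, vecs = np.linalg.eig(A)
    print(vals)                     # 5 and -1, the roots of L^2 - 4L - 5

    i = np.argmax(vals)             # index of lambda = 5
    v = vecs[:, i]
    print(A @ v, vals[i] * v)       # A*v == lambda*v: only scaled, not rotated
    print(v / v[1])                 # scale so v2 = 1 -> [0.5, 1], the span above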