Numerical Multilinear Algebra I


Numerical Multilinear Algebra I
Lek-Heng Lim
University of California, Berkeley
ICM Lecture, January 5–7, 2009

Hope

Over the past 50 years, Numerical Linear Algebra has played an indispensable role in
- the statistical analysis of two-way data,
- the numerical solution of partial differential equations arising from vector fields,
- the numerical solution of second-order optimization methods.

Next step: development of Numerical Multilinear Algebra for
- the statistical analysis of multi-way data,
- the numerical solution of partial differential equations arising from tensor fields,
- the numerical solution of higher-order optimization methods.

DARPA mathematical challenge eight

One of the twenty-three mathematical challenges announced at DARPATech 2007.

Problem. Beyond convex optimization: can linear algebra be replaced by algebraic geometry in a systematic way?

Algebraic geometry in a slogan: polynomials are to algebraic geometry what matrices are to linear algebra.

A polynomial f ∈ R[x_1, ..., x_n] of degree d can be expressed as

  f(x) = a_0 + a_1^T x + x^T A_2 x + A_3(x, x, x) + ... + A_d(x, ..., x),

where a_0 ∈ R, a_1 ∈ R^n, A_2 ∈ R^{n×n}, A_3 ∈ R^{n×n×n}, ..., A_d ∈ R^{n×...×n}.

Numerical linear algebra: d = 2.
Numerical multilinear algebra: d > 2.

Motivation

Why multilinear:
- "Classification of mathematical problems as linear and nonlinear is like classification of the Universe as bananas and non-bananas."
- Nonlinear: too general. Multilinear: the next natural step.

Why numerical:
- Different from Computer Algebra.
- Numerical rather than symbolic: floating point operations are cheap and abundant; symbolic operations are expensive.
- Like other areas of numerical analysis, it will entail the approximate solution of approximate multilinear problems with approximate data, but under controllable and rigorous confidence bounds on the errors involved.

Tensors: mathematician's definition

Let U, V, W be vector spaces. Think of U ⊗ V ⊗ W as the vector space of all formal linear combinations of terms of the form u ⊗ v ⊗ w,

  Σ α u ⊗ v ⊗ w,

where α ∈ R, u ∈ U, v ∈ V, w ∈ W.

One condition: ⊗ is decreed to have the multilinear property

  (αu_1 + βu_2) ⊗ v ⊗ w = α u_1 ⊗ v ⊗ w + β u_2 ⊗ v ⊗ w,
  u ⊗ (αv_1 + βv_2) ⊗ w = α u ⊗ v_1 ⊗ w + β u ⊗ v_2 ⊗ w,
  u ⊗ v ⊗ (αw_1 + βw_2) = α u ⊗ v ⊗ w_1 + β u ⊗ v ⊗ w_2.

Up to a choice of bases on U, V, W, a tensor A ∈ U ⊗ V ⊗ W can be represented by a 3-hypermatrix A = ⟦a_ijk⟧ ∈ R^{l×m×n}.

Tensors: physicist's definition

"What are tensors?" means "What kind of physical quantities can be represented by tensors?"

Usual answer: quantities that satisfy certain 'transformation rules' under a change of coordinates.

Theorem (Change of basis). Two representations A, A' of A in different bases are related by

  (L, M, N) · A = A',

with L, M, N the respective change-of-basis matrices (non-singular).

Pitfall: tensor fields (roughly, tensor-valued functions on manifolds) are often referred to simply as tensors: stress tensor, piezoelectric tensor, moment-of-inertia tensor, gravitational field tensor, metric tensor, curvature tensor.

Tensors: data analyst's definition

Data structure: a k-array A = ⟦a_ijk⟧ ∈ R^{l×m×n}.

Algebraic structure:
1. Addition/scalar multiplication: for ⟦b_ijk⟧ ∈ R^{l×m×n} and λ ∈ R,

     ⟦a_ijk⟧ + ⟦b_ijk⟧ := ⟦a_ijk + b_ijk⟧  and  λ⟦a_ijk⟧ := ⟦λ a_ijk⟧ ∈ R^{l×m×n}.

2. Multilinear matrix multiplication: for matrices L = [λ_{i'i}] ∈ R^{p×l}, M = [μ_{j'j}] ∈ R^{q×m}, N = [ν_{k'k}] ∈ R^{r×n},

     (L, M, N) · A := ⟦c_{i'j'k'}⟧ ∈ R^{p×q×r},

   where

     c_{i'j'k'} := Σ_{i=1}^l Σ_{j=1}^m Σ_{k=1}^n λ_{i'i} μ_{j'j} ν_{k'k} a_ijk.

Think of A as a 3-dimensional hypermatrix and of (L, M, N) · A as multiplication on '3 sides' by the matrices L, M, N.

Generalizes to arbitrary order k. If k = 2, i.e. A is a matrix, then (M, N) · A = MAN^T.
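In floating point, multilinear matrix multiplication is a single tensor contraction. A minimal NumPy sketch (the function name `mlmul` is my own, not from the lecture):

```python
import numpy as np

def mlmul(L, M, N, A):
    """Multilinear matrix multiplication (L, M, N) . A:
    c[i', j', k'] = sum_{i,j,k} L[i', i] M[j', j] N[k', k] A[i, j, k]."""
    return np.einsum('ai,bj,ck,ijk->abc', L, M, N, A)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 5))
L = rng.standard_normal((2, 3))
M = rng.standard_normal((2, 4))
N = rng.standard_normal((2, 5))
B = mlmul(L, M, N, A)

# Order-2 sanity check: (M, N) . A equals M A N^T for a matrix A.
A2 = rng.standard_normal((4, 5))
lhs = np.einsum('bj,ck,jk->bc', M, N, A2)
assert np.allclose(lhs, M @ A2 @ N.T)
```

The `einsum` index string mirrors the summation formula directly, which makes it easy to check against the definition.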

Hypermatrices

Totally ordered finite sets: [n] = {1 < 2 < ... < n}, n ∈ N.

Vector or n-tuple: f : [n] → R. If f(i) = a_i, then f is represented by a = [a_1, ..., a_n] ∈ R^n.

Matrix: f : [m] × [n] → R. If f(i, j) = a_ij, then f is represented by A = [a_ij] ∈ R^{m×n}.

Hypermatrix (order 3): f : [l] × [m] × [n] → R. If f(i, j, k) = a_ijk, then f is represented by A = ⟦a_ijk⟧ ∈ R^{l×m×n}.

Normally R^X = {f : X → R}, so these ought to be written R^{[n]}, R^{[m]×[n]}, R^{[l]×[m]×[n]}.

Hypermatrices and tensors

Up to a choice of bases,
- a ∈ R^n can represent a vector in V (contravariant) or a linear functional on V (covariant);
- A ∈ R^{m×n} can represent a bilinear form V* × W* → R (contravariant), a bilinear form V × W → R (covariant), or a linear operator V → W (mixed);
- A ∈ R^{l×m×n} can represent a trilinear form U × V × W → R (covariant), bilinear operators V × W → U (mixed), etc.

A hypermatrix is the same as a tensor if
1. we give it coordinates (represent it with respect to some bases);
2. we ignore covariance and contravariance.

Basic operation on a hypermatrix

A matrix can be multiplied on the left and right: for A ∈ R^{m×n}, X ∈ R^{p×m}, Y ∈ R^{q×n},

  (X, Y) · A = XAY^T = [c_αβ] ∈ R^{p×q},

where

  c_αβ = Σ_{i,j=1}^{m,n} x_αi y_βj a_ij.

A hypermatrix can be multiplied on three sides: for A = ⟦a_ijk⟧ ∈ R^{l×m×n}, X ∈ R^{p×l}, Y ∈ R^{q×m}, Z ∈ R^{r×n},

  (X, Y, Z) · A = ⟦c_αβγ⟧ ∈ R^{p×q×r},

where

  c_αβγ = Σ_{i,j,k=1}^{l,m,n} x_αi y_βj z_γk a_ijk.

Basic operation on a hypermatrix

Covariant version:

  A · (X, Y, Z) := (X^T, Y^T, Z^T) · A.

This gives convenient notation for multilinear functionals and multilinear operators. For x ∈ R^l, y ∈ R^m, z ∈ R^n,

  A(x, y, z) := A · (x, y, z) = Σ_{i,j,k=1}^{l,m,n} a_ijk x_i y_j z_k,
  A(I, y, z) := A · (I, y, z) = ⟦Σ_{j,k=1}^{m,n} a_ijk y_j z_k⟧ ∈ R^l.
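Both the scalar-valued functional A(x, y, z) and the partial contraction A(I, y, z) are one-line contractions in NumPy; a small sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4, 5))
x, y, z = rng.standard_normal(3), rng.standard_normal(4), rng.standard_normal(5)

# A(x, y, z) = sum_{i,j,k} a_ijk x_i y_j z_k  (a scalar)
Axyz = np.einsum('ijk,i,j,k->', A, x, y, z)

# A(I, y, z) = [sum_{j,k} a_ijk y_j z_k]_i  (a vector in R^l)
AIyz = np.einsum('ijk,j,k->i', A, y, z)

# Contracting the remaining mode with x recovers the scalar.
assert np.isclose(AIyz @ x, Axyz)
```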

Segre outer product

If U = R^l, V = R^m, W = R^n, then R^l ⊗ R^m ⊗ R^n may be identified with R^{l×m×n} if we define ⊗ by

  u ⊗ v ⊗ w = ⟦u_i v_j w_k⟧ ∈ R^{l×m×n}.

A tensor A ∈ R^{l×m×n} is said to be decomposable if it can be written in the form

  A = u ⊗ v ⊗ w

for some u ∈ R^l, v ∈ R^m, w ∈ R^n.

The set of all decomposable tensors is known as the Segre variety in algebraic geometry. It is a closed set (in both the Euclidean and the Zariski sense) as it can be described algebraically:

  Seg(R^l, R^m, R^n) = {A ∈ R^{l×m×n} | a_{i_1 i_2 i_3} a_{j_1 j_2 j_3} = a_{k_1 k_2 k_3} a_{l_1 l_2 l_3}, {i_α, j_α} = {k_α, l_α}}.
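The defining quadratic equations can be spot-checked numerically on a decomposable tensor; a sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
u, v, w = rng.standard_normal(2), rng.standard_normal(3), rng.standard_normal(4)

# Decomposable (rank-1) tensor u (x) v (x) w.
A = np.einsum('i,j,k->ijk', u, v, w)

# Two instances of the Segre variety's quadratic equations: exchanging
# index pairs between two entries leaves the product of entries unchanged.
assert np.isclose(A[0, 0, 0] * A[1, 1, 1], A[0, 1, 1] * A[1, 0, 0])
assert np.isclose(A[0, 0, 0] * A[1, 2, 3], A[1, 0, 0] * A[0, 2, 3])
```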

Symmetric hypermatrices

A cubical hypermatrix ⟦a_ijk⟧ ∈ R^{n×n×n} is symmetric if

  a_ijk = a_ikj = a_jik = a_jki = a_kij = a_kji,

i.e. it is invariant under all permutations σ ∈ S_k of its indices (here k = 3). S^k(R^n) denotes the set of all order-k symmetric hypermatrices.

Example. Higher-order derivatives of multivariate functions.

Example. Moments of a random vector x = (X_1, ..., X_n):

  m_k(x) = ⟦E(x_{i_1} x_{i_2} ... x_{i_k})⟧_{i_1,...,i_k=1}^n = ⟦∫...∫ x_{i_1} x_{i_2} ... x_{i_k} dμ(x_{i_1}) ... dμ(x_{i_k})⟧_{i_1,...,i_k=1}^n.
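An empirical third moment tensor illustrates the symmetry; a sketch using sample averages in place of the expectation:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(3)
X = rng.standard_normal((1000, 3))  # 1000 samples of a random vector in R^3

# Empirical third moment tensor m3[i,j,k] = E(X_i X_j X_k).
m3 = np.einsum('si,sj,sk->ijk', X, X, X) / X.shape[0]

# Invariant under every permutation of the three indices.
assert all(np.allclose(m3, m3.transpose(p)) for p in permutations(range(3)))
```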

Symmetric hypermatrices

Example. Cumulants of a random vector x = (X_1, ..., X_n):

  κ_k(x) = ⟦ Σ_{A_1 ⊔ ... ⊔ A_p = {i_1,...,i_k}} (−1)^{p−1} (p − 1)! E(Π_{i∈A_1} x_i) ... E(Π_{i∈A_p} x_i) ⟧_{i_1,...,i_k=1}^n.

For n = 1, κ_k(x) for k = 1, 2, 3, 4 gives the expectation, variance, skewness, and kurtosis.

Important in Independent Component Analysis (ICA).

Inner products and norms

ℓ^2([n]): for a, b ∈ R^n, ⟨a, b⟩ = a^T b = Σ_{i=1}^n a_i b_i.
ℓ^2([m] × [n]): for A, B ∈ R^{m×n}, ⟨A, B⟩ = tr(A^T B) = Σ_{i,j=1}^{m,n} a_ij b_ij.
ℓ^2([l] × [m] × [n]): for A, B ∈ R^{l×m×n}, ⟨A, B⟩ = Σ_{i,j,k=1}^{l,m,n} a_ijk b_ijk.

In general, ℓ^2([m] × [n]) = ℓ^2([m]) ⊗ ℓ^2([n]) and ℓ^2([l] × [m] × [n]) = ℓ^2([l]) ⊗ ℓ^2([m]) ⊗ ℓ^2([n]).

Frobenius norm:

  ‖A‖_F^2 = Σ_{i,j,k=1}^{l,m,n} a_ijk^2.

The norm topology is often more directly relevant to engineering applications than the Zariski topology.
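The entrywise inner product and Frobenius norm coincide with the Euclidean ones on the vectorized tensor; a quick check:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 4, 5))
B = rng.standard_normal((3, 4, 5))

inner = np.sum(A * B)          # <A, B> = sum_ijk a_ijk b_ijk
fro = np.sqrt(np.sum(A * A))   # ||A||_F

# Matches the vectorized Euclidean inner product and norm.
assert np.isclose(inner, A.ravel() @ B.ravel())
assert np.isclose(fro, np.linalg.norm(A.ravel()))
```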

Other norms

Let ‖·‖_{α_i} be a norm on R^{d_i}, i = 1, ..., k. Then the operator norm of a multilinear functional A : R^{d_1} × ... × R^{d_k} → R is

  ‖A‖_{α_1,...,α_k} := sup |A(x_1, ..., x_k)| / (‖x_1‖_{α_1} ... ‖x_k‖_{α_k}).

There are deep and important results about such norms in functional analysis.

E-norm and G-norm:

  ‖A‖_E = Σ_{j_1,...,j_k=1}^{d_1,...,d_k} |a_{j_1...j_k}|

and

  ‖A‖_G = max{ |a_{j_1...j_k}| : j_1 = 1, ..., d_1; ...; j_k = 1, ..., d_k }.

Multiplicative on rank-1 tensors:

  ‖u ⊗ v ⊗ ... ⊗ z‖_E = ‖u‖_1 ‖v‖_1 ... ‖z‖_1,
  ‖u ⊗ v ⊗ ... ⊗ z‖_F = ‖u‖_2 ‖v‖_2 ... ‖z‖_2,
  ‖u ⊗ v ⊗ ... ⊗ z‖_G = ‖u‖_∞ ‖v‖_∞ ... ‖z‖_∞.
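The multiplicativity of the E-, F-, and G-norms on rank-1 tensors is easy to verify numerically; a sketch:

```python
import numpy as np

rng = np.random.default_rng(11)
u, v, w = rng.standard_normal(3), rng.standard_normal(4), rng.standard_normal(5)
A = np.einsum('i,j,k->ijk', u, v, w)  # rank-1 tensor u (x) v (x) w

# E-norm (entrywise 1-norm), F-norm (entrywise 2-norm), G-norm (entrywise max)
# factor into the corresponding vector norms of u, v, w.
assert np.isclose(np.abs(A).sum(),
                  np.abs(u).sum() * np.abs(v).sum() * np.abs(w).sum())
assert np.isclose(np.sqrt((A ** 2).sum()),
                  np.linalg.norm(u) * np.linalg.norm(v) * np.linalg.norm(w))
assert np.isclose(np.abs(A).max(),
                  np.abs(u).max() * np.abs(v).max() * np.abs(w).max())
```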

Tensor ranks (Hitchcock, 1927)

Matrix rank. For A ∈ R^{m×n},

  rank(A) = dim(span_R{A_{·1}, ..., A_{·n}})    (column rank)
          = dim(span_R{A_{1·}, ..., A_{m·}})    (row rank)
          = min{r | A = Σ_{i=1}^r u_i v_i^T}    (outer product rank).

Multilinear rank. For A ∈ R^{l×m×n}, rank⊞(A) = (r_1(A), r_2(A), r_3(A)), where

  r_1(A) = dim(span_R{A_{1··}, ..., A_{l··}}),
  r_2(A) = dim(span_R{A_{·1·}, ..., A_{·m·}}),
  r_3(A) = dim(span_R{A_{··1}, ..., A_{··n}}).

Outer product rank. For A ∈ R^{l×m×n},

  rank⊗(A) = min{r | A = Σ_{i=1}^r u_i ⊗ v_i ⊗ w_i},

where u ⊗ v ⊗ w := ⟦u_i v_j w_k⟧ ∈ R^{l×m×n}.
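The three components of the multilinear rank are ordinary matrix ranks of the three flattenings of A; a sketch (the flattening convention below is one common choice, not fixed by the slide):

```python
import numpy as np

def multilinear_rank(A):
    """(r1, r2, r3): rank of each mode's flattening (slices stacked as rows)."""
    l, m, n = A.shape
    r1 = np.linalg.matrix_rank(A.reshape(l, m * n))
    r2 = np.linalg.matrix_rank(np.moveaxis(A, 1, 0).reshape(m, l * n))
    r3 = np.linalg.matrix_rank(np.moveaxis(A, 2, 0).reshape(n, l * m))
    return (r1, r2, r3)

rng = np.random.default_rng(5)
# A random 3 x 4 x 5 tensor generically has multilinear rank (3, 4, 5).
A = rng.standard_normal((3, 4, 5))
assert multilinear_rank(A) == (3, 4, 5)

# A rank-1 tensor has multilinear rank (1, 1, 1).
u, v, w = rng.standard_normal(3), rng.standard_normal(4), rng.standard_normal(5)
assert multilinear_rank(np.einsum('i,j,k->ijk', u, v, w)) == (1, 1, 1)
```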

Properties of matrix rank

1. The rank of A ∈ R^{m×n} is easy to determine (Gaussian elimination).
2. A best rank-r approximation to A ∈ R^{m×n} always exists (Eckart–Young theorem).
3. A best rank-r approximation to A ∈ R^{m×n} is easy to find (singular value decomposition).
4. Pick A ∈ R^{m×n} at random; then A has full rank with probability 1, i.e. rank(A) = min{m, n}.
5. rank(A) from a non-orthogonal rank-revealing decomposition (e.g. A = L_1 D L_2^T) and rank(A) from an orthogonal rank-revealing decomposition (e.g. A = Q_1 R Q_2^T) are equal.
6. rank(A) is base-field independent, i.e. it has the same value whether we regard A as an element of R^{m×n} or of C^{m×n}.

Properties of outer product rank

1. Computing rank⊗(A) for A ∈ R^{l×m×n} is NP-hard [Håstad 1990].
2. For some A ∈ R^{l×m×n}, argmin_{rank⊗(B)≤r} ‖A − B‖_F does not have a solution.
3. When argmin_{rank⊗(B)≤r} ‖A − B‖_F does have a solution, computing the solution is an NP-complete problem in general.
4. For some l, m, n, if we sample A ∈ R^{l×m×n} at random, there is no r such that rank⊗(A) = r with probability 1.
5. An outer product decomposition of A ∈ R^{l×m×n} with orthogonality constraints on X, Y, Z will in general require a sum with more than rank⊗(A) terms.
6. rank⊗(A) is base-field dependent, i.e. its value depends on whether we regard A ∈ R^{l×m×n} or A ∈ C^{l×m×n}.

Properties of multilinear rank

1. Computing rank⊞(A) for A ∈ R^{l×m×n} is easy.
2. A solution to argmin_{rank⊞(B)≤(r_1,r_2,r_3)} ‖A − B‖_F always exists.
3. A solution to argmin_{rank⊞(B)≤(r_1,r_2,r_3)} ‖A − B‖_F is easy to find.
4. Pick A ∈ R^{l×m×n} at random; then A has rank⊞(A) = (min(l, mn), min(m, ln), min(n, lm)) with probability 1.
5. If A ∈ R^{l×m×n} has rank⊞(A) = (r_1, r_2, r_3), then there exist full-rank matrices X ∈ R^{l×r_1}, Y ∈ R^{m×r_2}, Z ∈ R^{n×r_3} and a core tensor C ∈ R^{r_1×r_2×r_3} such that A = (X, Y, Z) · C. Moreover, X, Y, Z may be chosen to have orthonormal columns.
6. rank⊞(A) is base-field independent, i.e. it has the same value whether we regard A ∈ R^{l×m×n} or A ∈ C^{l×m×n}.
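Property 5 can be verified constructively via the higher-order SVD: take X, Y, Z as left singular vectors of the three flattenings and C = (X^T, Y^T, Z^T) · A. A sketch (the helper name `mode_basis` is my own):

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((3, 4, 5))

def mode_basis(A, mode):
    """Orthonormal basis for the mode's slices: left singular vectors of the flattening."""
    M = np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1)
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return U

X, Y, Z = (mode_basis(A, m) for m in range(3))

# Core tensor C = (X^T, Y^T, Z^T) . A.
C = np.einsum('ia,jb,kc,ijk->abc', X, Y, Z, A)

# A = (X, Y, Z) . C exactly, with orthonormal X, Y, Z.
A_rebuilt = np.einsum('ia,jb,kc,abc->ijk', X, Y, Z, C)
assert np.allclose(A, A_rebuilt)
assert np.allclose(X.T @ X, np.eye(X.shape[1]))
```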

Algebraic computational complexity

For A = (a_ij), B = (b_jk) ∈ R^{n×n},

  AB = Σ_{i,j,k=1}^n a_ik b_kj E_ij = Σ_{i,j,k=1}^n φ_ik(A) φ_kj(B) E_ij,

where E_ij = e_i e_j^T ∈ R^{n×n}. Let

  T = Σ_{i,j,k=1}^n φ_ik ⊗ φ_kj ⊗ E_ij.

An O(n^{2+ε}) algorithm for multiplying two n×n matrices gives an O(n^{2+ε}) algorithm for solving a system of n linear equations [Strassen 1969].

Conjecture: log_n(rank⊗(T)) ≤ 2 + ε for every ε > 0, i.e. the exponent of matrix multiplication is 2.

Best known result: O(n^{2.376}) [Coppersmith–Winograd 1987; Cohn–Kleinberg–Szegedy–Umans 2005].
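Strassen's scheme shows rank⊗(T) ≤ 7 for 2×2 matrix multiplication (versus 8 for the naive algorithm); a sketch of the seven-multiplication formula:

```python
import numpy as np

def strassen_2x2(A, B):
    """Strassen's 7-multiplication scheme for 2x2 matrices, witnessing
    that the 2x2 matrix multiplication tensor has outer product rank <= 7."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    p1 = (a + d) * (e + h)
    p2 = (c + d) * e
    p3 = a * (f - h)
    p4 = d * (g - e)
    p5 = (a + b) * h
    p6 = (c - a) * (e + f)
    p7 = (b - d) * (g + h)
    return np.array([[p1 + p4 - p5 + p7, p3 + p5],
                     [p2 + p4, p1 - p2 + p3 + p6]])

rng = np.random.default_rng(6)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
assert np.allclose(strassen_2x2(A, B), A @ B)
```

Applied recursively to 2×2 blocks, the same seven-product scheme gives the O(n^{log_2 7}) ≈ O(n^{2.807}) algorithm.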

More tensor ranks

For u ∈ R^l, v ∈ R^m, w ∈ R^n,

  u ⊗ v ⊗ w := ⟦u_i v_j w_k⟧ ∈ R^{l×m×n}.

Outer product rank. For A ∈ R^{l×m×n},

  rank⊗(A) = min{r | A = Σ_{i=1}^r σ_i u_i ⊗ v_i ⊗ w_i, σ_i ∈ R}.

Symmetric outer product rank. For A ∈ S^k(R^n),

  rank_S(A) = min{r | A = Σ_{i=1}^r λ_i v_i ⊗ v_i ⊗ ... ⊗ v_i, λ_i ∈ R}.

Nonnegative outer product rank. For A ∈ R_+^{l×m×n},

  rank_+(A) = min{r | A = Σ_{i=1}^r δ_i x_i ⊗ y_i ⊗ z_i, δ_i ∈ R_+}.

SVD, EVD, NMF of a matrix

Singular value decomposition of A ∈ R^{m×n}:

  A = UΣV^T = Σ_{i=1}^r σ_i u_i ⊗ v_i,

where rank(A) = r, U ∈ O(m) (left singular vectors), V ∈ O(n) (right singular vectors), Σ (singular values).

Symmetric eigenvalue decomposition of A ∈ S^2(R^n):

  A = VΛV^T = Σ_{i=1}^r λ_i v_i ⊗ v_i,

where rank(A) = r, V ∈ O(n) (eigenvectors), Λ (eigenvalues).

Nonnegative matrix factorization of A ∈ R_+^{m×n}:

  A = XΔY^T = Σ_{i=1}^r δ_i x_i ⊗ y_i,

where rank_+(A) = r, X ∈ R_+^{m×r}, Y ∈ R_+^{n×r} with unit column vectors (in the 1-norm), and δ_i positive values.
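The SVD really is a sum of rank-1 outer products, which is the form the tensor analogues generalize; a quick check:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# A = sum_i sigma_i u_i (x) v_i, a sum of rank-1 outer products.
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
assert np.allclose(A, A_rebuilt)
```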

SVD, EVD, NMF of a hypermatrix

Outer product decomposition of A ∈ R^{l×m×n}:

  A = Σ_{i=1}^r σ_i u_i ⊗ v_i ⊗ w_i,

where rank⊗(A) = r, u_i ∈ R^l, v_i ∈ R^m, w_i ∈ R^n unit vectors, σ_i ∈ R.

Symmetric outer product decomposition of A ∈ S^3(R^n):

  A = Σ_{i=1}^r λ_i v_i ⊗ v_i ⊗ v_i,

where rank_S(A) = r, v_i unit vectors, λ_i ∈ R.

Nonnegative outer product decomposition of A ∈ R_+^{l×m×n}:

  A = Σ_{i=1}^r δ_i x_i ⊗ y_i ⊗ z_i,

where rank_+(A) = r, x_i ∈ R_+^l, y_i ∈ R_+^m, z_i ∈ R_+^n unit vectors, δ_i ∈ R_+.

Best low rank approximation of a matrix

Given A ∈ R^{m×n}, we want

  argmin_{rank(B)≤r} ‖A − B‖.

More precisely, find σ_i, u_i, v_i, i = 1, ..., r, that minimize

  ‖A − σ_1 u_1 ⊗ v_1 − σ_2 u_2 ⊗ v_2 − ... − σ_r u_r ⊗ v_r‖.

Theorem (Eckart–Young). Let A = UΣV^T = Σ_{i=1}^{rank(A)} σ_i u_i ⊗ v_i be the singular value decomposition. For r ≤ rank(A), let

  A_r := Σ_{i=1}^r σ_i u_i ⊗ v_i.

Then

  ‖A − A_r‖_F = min_{rank(B)≤r} ‖A − B‖_F.

There is no such thing for hypermatrices of order 3 or higher.
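The Eckart–Young statement is easy to probe numerically: the truncated-SVD error equals the tail of the singular values, and no other rank-r matrix does better. A sketch (the random B is just one competitor, not a proof):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((6, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

r = 2
A_r = (U[:, :r] * s[:r]) @ Vt[:r, :]   # best rank-r approximation

# Error equals the norm of the discarded singular values...
assert np.isclose(np.linalg.norm(A - A_r), np.sqrt(np.sum(s[r:] ** 2)))

# ...and beats an arbitrary rank-r competitor.
B = rng.standard_normal((6, r)) @ rng.standard_normal((r, 5))
assert np.linalg.norm(A - A_r) <= np.linalg.norm(A - B) + 1e-12
```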

Segre variety and its secant varieties

The set of all rank-1 hypermatrices is known as the Segre variety in algebraic geometry. It is a closed set (in both the Euclidean and the Zariski sense) as it can be described algebraically:

  Seg(R^l, R^m, R^n) = {A ∈ R^{l×m×n} | A = u ⊗ v ⊗ w}
                     = {A ∈ R^{l×m×n} | a_{i_1 i_2 i_3} a_{j_1 j_2 j_3} = a_{k_1 k_2 k_3} a_{l_1 l_2 l_3}, {i_α, j_α} = {k_α, l_α}}.

Hypermatrices of rank greater than 1 are elements on the higher secant varieties of S = Seg(R^l, R^m, R^n). E.g. a hypermatrix has rank 2 if it sits on a secant line through two points of S but not on S, rank 3 if it sits on a secant plane through three points of S but not on any secant line, etc.

Minor technicality: these should really be secant quasiprojective varieties.

Scientific data mining

Spectroscopy: measure light absorption/emission of a specimen as a function of energy.

A typical specimen contains 10^13 to 10^16 light-absorbing entities or chromophores (molecules, amino acids, etc.).

Fact (Beer's Law). A(λ) = −log(I_1/I_0) = ε(λ)c, where A = absorbance, I_1/I_0 = fraction of the intensity of light of wavelength λ that passes through the specimen, and c = concentration of chromophores.

With multiple chromophores (f = 1, ..., r), wavelengths (i = 1, ..., m), and specimens/experimental conditions (j = 1, ..., n),

  A(λ_i, s_j) = Σ_{f=1}^r ε_f(λ_i) c_f(s_j).

Bilinear model, aka factor analysis: A_{m×n} = E_{m×r} C_{r×n}, a rank-revealing factorization or, in the presence of noise, a low-rank approximation min ‖A_{m×n} − E_{m×r} C_{r×n}‖.

Modern data mining

Text mining is the spectroscopy of documents.
- Specimens = documents.
- Chromophores = terms.
- Absorbance = inverse document frequency:

    A(t_i) = −log( Σ_j χ(f_ij)/n ).

- Concentration = term frequency: f_ij.
- Σ_j χ(f_ij)/n = fraction of documents containing t_i.

A ∈ R^{m×n} is the term-document matrix; A = QR and A = UΣV^T are rank-revealing factorizations.

Bilinear model, aka the vector space model. Due to Gerard Salton and colleagues: SMART (System for the Mechanical Analysis and Retrieval of Text).
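The inverse-document-frequency weighting can be computed on a toy term-document count matrix (the data here is invented for illustration; χ is the indicator of a term occurring in a document):

```python
import numpy as np

# Toy term-document counts f_ij: 3 terms x 4 documents (made-up data).
F = np.array([[2, 0, 1, 0],
              [1, 1, 1, 1],
              [0, 3, 0, 0]], dtype=float)
n = F.shape[1]

# chi(f_ij) indicates whether term i occurs in document j;
# idf_i = -log( fraction of documents containing term i ).
df = (F > 0).sum(axis=1) / n
idf = -np.log(df)

# A term occurring in every document carries zero weight;
# rarer terms get larger weights.
assert np.isclose(idf[1], 0.0)
assert idf[2] > idf[0] > idf[1]
```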

Bilinear models

Bilinear models work on 'two-way' data:
- Measurements on object i (genomes, chemical samples, images, webpages, consumers, etc.) yield a vector a_i ∈ R^n, where n = number of features of i.
- A collection of m such objects, A = [a_1, ..., a_m], may be regarded as an m-by-n matrix, e.g. gene × microarray matrices in bioinformatics, terms × documents matrices in text mining, facial images × individuals matrices in computer vision.

Various matrix techniques may be applied to extract useful information: QR, EVD, SVD, NMF, CUR, compressed sensing techniques, etc.

Examples: vector space model, factor analysis, principal component analysis, latent semantic indexing, PageRank, EigenFaces.

Some problems: factor indeterminacy (an A = XY rank-revealing factorization is not unique); unnatural for k-way data when k > 2.

Ubiquity of multiway data

- Batch data: batch × time × variable
- Time-series analysis: time × variable × lag
- Computer vision: people × view × illumination × expression × pixel
- Bioinformatics: gene × microarray × oxidative stress
- Phylogenetics: codon × codon × codon
- Analytical chemistry: sample × elution time × wavelength
- Atmospheric science: location × variable × time × observation
- Psychometrics: individual × variable × time
- Sensory analysis: sample × attribute × judge
- Marketing: product × product × consumer

Fact (inevitable consequence of technological advancement): increasingly sophisticated instruments, sensor devices, and data-collecting and experimental methodologies lead to increasingly complex data.

Fundamental problem of multiway data analysis

Given A, a hypermatrix, symmetric hypermatrix, or nonnegative hypermatrix, solve

  argmin_{rank(B)≤r} ‖A − B‖,

where rank may be outer product rank, multilinear rank, symmetric rank (for a symmetric hypermatrix), or nonnegative rank (for a nonnegative hypermatrix).

Example. Given A ∈ R^{d_1×d_2×d_3}, find u_i, v_i, w_i, i = 1, ..., r, that minimize

  ‖A − u_1 ⊗ v_1 ⊗ w_1 − u_2 ⊗ v_2 ⊗ w_2 − ... − u_r ⊗ v_r ⊗ w_r‖

or C ∈ R^{r_1×r_2×r_3} and U ∈ R^{d_1×r_1}, V ∈ R^{d_2×r_2}, W ∈ R^{d_3×r_3} that minimize

  ‖A − (U, V, W) · C‖.

Fundamental problem of multiway data analysis

Example. Given A ∈ S^k(C^n), find u_i, i = 1, ..., r, that minimize

  ‖A − u_1^⊗k − u_2^⊗k − ... − u_r^⊗k‖

or C ∈ R^{r_1×r_2×r_3} and U ∈ R^{n×r_i} that minimize

  ‖A − (U, U, U) · C‖.

Outer product decomposition in spectroscopy

Application to fluorescence spectral analysis by [Bro; 1997].

Specimens with a number of pure substances in different concentrations:
- a_ijk = fluorescence emission intensity of the ith sample at wavelength λ_j^em, excited with light at wavelength λ_k^ex.
- Get 3-way data A = ⟦a_ijk⟧ ∈ R^{l×m×n}.
- Get the outer product decomposition of A:

    A = x_1 ⊗ y_1 ⊗ z_1 + ... + x_r ⊗ y_r ⊗ z_r.

- Get the true chemical factors responsible for the data:
  - r: number of pure substances in the mixtures,
  - x_α = (x_{1α}, ..., x_{lα}): relative concentrations of the αth substance in specimens 1, ..., l,
  - y_α = (y_{1α}, ..., y_{mα}): excitation spectrum of the αth substance,
  - z_α = (z_{1α}, ..., z_{nα}): emission spectrum of the αth substance.

Noisy case: find the best rank-r approximation (candecomp/parafac).
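A minimal alternating-least-squares pass for a rank-1 candecomp/parafac fit (my own sketch, not Bro's implementation; on noiseless rank-1 data a single sweep already recovers the model exactly):

```python
import numpy as np

def als_rank1(A, iters=5, seed=0):
    """Fit A ~ x (x) y (x) z by alternating least squares (rank-1 CP sketch).
    Each update is the exact least-squares solution with the other two fixed."""
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(A.shape[1])
    z = rng.standard_normal(A.shape[2])
    for _ in range(iters):
        x = np.einsum('ijk,j,k->i', A, y, z) / ((y @ y) * (z @ z))
        y = np.einsum('ijk,i,k->j', A, x, z) / ((x @ x) * (z @ z))
        z = np.einsum('ijk,i,j->k', A, x, y) / ((x @ x) * (y @ y))
    return x, y, z

# On noiseless rank-1 data the model is recovered (up to scaling of factors).
rng = np.random.default_rng(10)
xt, yt, zt = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
A = np.einsum('i,j,k->ijk', xt, yt, zt)
x, y, z = als_rank1(A)
assert np.allclose(np.einsum('i,j,k->ijk', x, y, z), A)
```

For r > 1 and noisy data, the same alternating idea applies factor-matrix-wise, but the iteration may converge slowly or to a local minimum, consistent with the approximation difficulties listed earlier.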

Uniqueness of tensor decompositions

For M ∈ R^{m×n}, spark(M) = size of a minimal linearly dependent subset of the column vectors [Donoho, Elad; 2003].

Theor

