An Introduction To Random Matrices - NYU Courant


An Introduction to Random Matrices

Greg W. Anderson, University of Minnesota
Alice Guionnet, ENS Lyon
Ofer Zeitouni, University of Minnesota and Weizmann Institute of Science

copyright information here

To Meredith, Ben and Naomi

Contents

Preface   page xiii
1  Introduction   1
2  Real and complex Wigner matrices   6
  2.1  Real Wigner matrices: traces, moments and combinatorics   6
    2.1.1  The semicircle distribution, Catalan numbers and Dyck paths   7
    2.1.2  Proof #1 of Wigner's Theorem 2.1.1   10
    2.1.3  Proof of Lemma 2.1.6: words and graphs   11
    2.1.4  Proof of Lemma 2.1.7: sentences and graphs   17
    2.1.5  Some useful approximations   21
    2.1.6  Maximal eigenvalues and Füredi–Komlós enumeration   23
    2.1.7  Central limit theorems for moments   29
  2.2  Complex Wigner matrices   35
  2.3  Concentration for functionals of random matrices and logarithmic Sobolev inequalities   38
    2.3.1  Smoothness properties of linear functions of the empirical measure   38
    2.3.2  Concentration inequalities for independent variables satisfying logarithmic Sobolev inequalities   39
    2.3.3  Concentration for Wigner-type matrices   42
  2.4  Stieltjes transforms and recursions   43
    2.4.1  Gaussian Wigner matrices   45
    2.4.2  General Wigner matrices   47
  2.5  Joint distribution of eigenvalues in the GOE and the GUE   50
    2.5.1  Definition and preliminary discussion of the GOE and the GUE   51
    2.5.2  Proof of the joint distribution of eigenvalues   54
    2.5.3  Selberg's integral formula and proof of (2.5.4)   58
    2.5.4  Joint distribution of eigenvalues: alternative formulation   65
    2.5.5  Superposition and decimation relations   66
  2.6  Large deviations for random matrices   70
    2.6.1  Large deviations for the empirical measure   71
    2.6.2  Large deviations for the top eigenvalue   81
  2.7  Bibliographical notes   85
3  Hermite polynomials, spacings and limit distributions for the Gaussian ensembles   90
  3.1  Summary of main results: spacing distributions in the bulk and edge of the spectrum for the Gaussian ensembles   90
    3.1.1  Limit results for the GUE   90
    3.1.2  Generalizations: limit formulas for the GOE and GSE   93
  3.2  Hermite polynomials and the GUE   94
    3.2.1  The GUE and determinantal laws   94
    3.2.2  Properties of the Hermite polynomials and oscillator wave-functions   99
  3.3  The semicircle law revisited   101
    3.3.1  Calculation of moments of $\bar{L}_N$   102
    3.3.2  The Harer–Zagier recursion and Ledoux's argument   103
  3.4  Quick introduction to Fredholm determinants   107
    3.4.1  The setting, fundamental estimates and definition of the Fredholm determinant   107
    3.4.2  Definition of the Fredholm adjugant, Fredholm resolvent and a fundamental identity   110
  3.5  Gap probabilities at 0 and proof of Theorem 3.1.1   114
    3.5.1  The method of Laplace   115
    3.5.2  Evaluation of the scaling limit: proof of Lemma 3.5.1   117
    3.5.3  A complement: determinantal relations   120
  3.6  Analysis of the sine-kernel   121
    3.6.1  General differentiation formulas   121
    3.6.2  Derivation of the differential equations: proof of Theorem 3.6.1   126
    3.6.3  Reduction to Painlevé V   128
  3.7  Edge-scaling: proof of Theorem 3.1.4   132
    3.7.1  Vague convergence of the largest eigenvalue: proof of Theorem 3.1.4   133
    3.7.2  Steepest descent: proof of Lemma 3.7.2   134
    3.7.3  Properties of the Airy functions and proof of Lemma 3.7.1   139
  3.8  Analysis of the Tracy–Widom distribution and proof of Theorem 3.1.5   142
    3.8.1  The first standard moves of the game   144
    3.8.2  The wrinkle in the carpet   144
    3.8.3  Linkage to Painlevé II   146
  3.9  Limiting behavior of the GOE and the GSE   148
    3.9.1  Pfaffians and gap probabilities   148
    3.9.2  Fredholm representation of gap probabilities   155
    3.9.3  Limit calculations   160
    3.9.4  Differential equations   170
  3.10  Bibliographical notes   181
4  Some generalities   186
  4.1  Joint distribution of eigenvalues in the classical matrix ensembles   187
    4.1.1  Integration formulas for classical ensembles   187
    4.1.2  Manifolds, volume measures and the coarea formula   193
    4.1.3  An integration formula of Weyl type   199
    4.1.4  Applications of Weyl's formula   206
  4.2  Determinantal point processes   214
    4.2.1  Point processes: basic definitions   215
    4.2.2  Determinantal processes   220
    4.2.3  Determinantal projections   222
    4.2.4  The CLT for determinantal processes   227
    4.2.5  Determinantal processes associated with eigenvalues   228
    4.2.6  Translation invariant determinantal processes   232
    4.2.7  One-dimensional translation invariant determinantal processes   237
    4.2.8  Convergence issues   241
    4.2.9  Examples   243
  4.3  Stochastic analysis for random matrices   248
    4.3.1  Dyson's Brownian motion   249
    4.3.2  A dynamical version of Wigner's Theorem   262
    4.3.3  Dynamical central limit theorems   273
    4.3.4  Large deviation bounds   277
  4.4  Concentration of measure and random matrices   281
    4.4.1  Concentration inequalities for Hermitian matrices with independent entries   282
    4.4.2  Concentration inequalities for matrices with dependent entries   287
  4.5  Tridiagonal matrix models and the β ensembles   302
    4.5.1  Tridiagonal representation of β ensembles   303
    4.5.2  Scaling limits at the edge of the spectrum   306
  4.6  Bibliographical notes   318
5  Free probability   322
  5.1  Introduction and main results   323
  5.2  Noncommutative laws and noncommutative probability spaces   325
    5.2.1  Algebraic noncommutative probability spaces and laws   325
    5.2.2  C*-probability spaces and the weak*-topology   329
    5.2.3  W*-probability spaces   339
  5.3  Free independence   348
    5.3.1  Independence and free independence   348
    5.3.2  Free independence and combinatorics   354
    5.3.3  Consequence of free independence: free convolution   359
    5.3.4  Free central limit theorem   368
    5.3.5  Freeness for unbounded variables   369
  5.4  Link with random matrices   374
  5.5  Convergence of the operator norm of polynomials of independent GUE matrices   394
  5.6  Bibliographical notes   410
Appendices   414
A  Linear algebra preliminaries   414
  A.1  Identities and bounds   414
  A.2  Perturbations for normal and Hermitian matrices   415
  A.3  Noncommutative matrix L^p-norms   416
  A.4  Brief review of resultants and discriminants   417
B  Topological preliminaries   418
  B.1  Generalities   418
  B.2  Topological vector spaces and weak topologies   420
  B.3  Banach and Polish spaces   422
  B.4  Some elements of analysis   423
C  Probability measures on Polish spaces   423
  C.1  Generalities   423
  C.2  Weak topology   425
D  Basic notions of large deviations   427
E  The skew field H of quaternions and matrix theory over F   430
  E.1  Matrix terminology over F and factorization theorems   431
  E.2  The spectral theorem and key corollaries   433
  E.3  A specialized result on projectors   434
  E.4  Algebra for curvature computations   435
F  Manifolds   437
  F.1  Manifolds embedded in Euclidean space   438
  F.2  Proof of the coarea formula   442
  F.3  Metrics, connections, curvature, Hessians, and the Laplace–Beltrami operator   445
G  Appendix on operator algebras   450
  G.1  Basic definitions   450
  G.2  Spectral properties   452
  G.3  States and positivity   454
  G.4  von Neumann algebras   455
  G.5  Noncommutative functional calculus   457
H  Stochastic calculus notions   459
References   465
General conventions and notation   481
Index   484

Preface

The study of random matrices, and in particular the properties of their eigenvalues, has emerged from the applications, first in data analysis and later as statistical models for heavy-nuclei atoms. Thus, the field of random matrices owes its existence to applications. Over the years, however, it became clear that models related to random matrices play an important role in areas of pure mathematics. Moreover, the tools used in the study of random matrices came themselves from different and seemingly unrelated branches of mathematics.

At this point in time, the topic has evolved enough that the newcomer, especially if coming from the field of probability theory, faces a formidable and somewhat confusing task in trying to access the research literature. Furthermore, the background expected of such a newcomer is diverse, and often has to be supplemented before a serious study of random matrices can begin.

We believe that many parts of the field of random matrices are now developed enough to enable one to expose the basic ideas in a systematic and coherent way. Indeed, such a treatise, geared toward theoretical physicists, has existed for some time, in the form of Mehta's superb book [Meh91]. Our goal in writing this book has been to present a rigorous introduction to the basic theory of random matrices, including free probability, that is sufficiently self-contained to be accessible to graduate students in mathematics or related sciences who have mastered probability theory at the graduate level, but have not necessarily been exposed to advanced notions of functional analysis, algebra or geometry. Along the way, enough techniques are introduced that we hope will allow readers to continue their journey into the current research literature.

This project started as notes for a class on random matrices that two of us (G. A. and O. Z.) taught at the University of Minnesota in the fall of 2003, and notes for a course in the probability summer school in St. Flour taught by A. G. in the summer of 2006. The comments of participants in these courses, and in particular A. Bandyopadhyay, H. Dong, K. Hoffman-Credner, A. Klenke, D. Stanton and P.M. Zamfir, were extremely useful. As these notes evolved, we taught from them again at the University of Minnesota, the University of California at Berkeley, the Technion and the Weizmann Institute, and received more much-appreciated feedback from the participants in those courses. Finally, when expanding and refining these course notes, we have profited from the comments and questions of many colleagues. We would like in particular to thank G. Ben Arous, F. Benaych-Georges, P. Biane, P. Deift, A. Dembo, P. Diaconis, U. Haagerup, V. Jones, M. Krishnapur, Y. Peres, R. Pinsky, G. Pisier, B. Rider, D. Shlyakhtenko, B. Solel, A. Soshnikov, R. Speicher, T. Suidan, C. Tracy, B. Virag and D. Voiculescu for their suggestions, corrections and patience in answering our questions or explaining their work to us. Of course, any remaining mistakes and unclear passages are fully our responsibility.

Greg Anderson, Minneapolis, Minnesota
Alice Guionnet, Lyon, France
Ofer Zeitouni, Rehovot, Israel

1 Introduction

This book is concerned with random matrices. Given the ubiquitous role that matrices play in mathematics and their applications in the sciences and engineering, it seems natural that the evolution of probability theory would eventually pass through random matrices. The reality, however, has been more complicated (and interesting). Indeed, the study of random matrices, and in particular the properties of their eigenvalues, has emerged from the applications, first in data analysis (in the early days of statistical sciences, going back to Wishart [Wis28]), and later as statistical models for heavy-nuclei atoms, beginning with the seminal work of Wigner [Wig55]. Still motivated by physical applications, at the able hands of Wigner, Dyson, Mehta and co-workers, a mathematical theory of the spectrum of random matrices began to emerge in the early 1960s, and links with various branches of mathematics, including classical analysis and number theory, were established. While much progress was initially achieved using enumerative combinatorics, gradually, sophisticated and varied mathematical tools were introduced: Fredholm determinants (in the 1960s), diffusion processes (in the 1960s), integrable systems (in the 1980s and early 1990s), and the Riemann–Hilbert problem (in the 1990s) all made their appearance, as well as new tools such as the theory of free probability (in the 1990s). This wide array of tools, while attesting to the vitality of the field, presents, however, several formidable obstacles to the newcomer, and even to the expert probabilist.
Indeed, while much of the recent research uses sophisticated probabilistic tools, it builds on layers of common knowledge that, in the aggregate, few people possess.

Our goal in this book is to present a rigorous introduction to the basic theory of random matrices that would be sufficiently self-contained to be accessible to graduate students in mathematics or related sciences who have mastered probability theory at the graduate level, but have not necessarily been exposed to advanced notions of functional analysis, algebra or geometry. With such readers in mind, we present some background material in the appendices, that novice and expert alike can consult; most material in the appendices is stated without proof, although the details of some specialized computations are provided.

Keeping in mind our stated emphasis on accessibility over generality, the book is essentially divided into two parts. In Chapters 2 and 3, we present a self-contained analysis of random matrices, quickly focusing on the Gaussian ensembles and culminating in the derivation of the gap probabilities at 0 and the Tracy–Widom law. These chapters can be read with very little background knowledge, and are particularly suitable for an introductory study. In the second part of the book, Chapters 4 and 5, we use more advanced techniques, requiring more extensive background, to emphasize and generalize certain aspects of the theory, and to introduce the theory of free probability.

So what is a random matrix, and what questions are we about to study? Throughout, let $F = \mathbb{R}$ or $F = \mathbb{C}$, and set $\beta = 1$ in the former case and $\beta = 2$ in the latter. (In Section 4.1, we will also consider the case $F = \mathbb{H}$, the skew field of quaternions; see Appendix E for definitions and details.) Let $\mathrm{Mat}_N(F)$ denote the space of $N$-by-$N$ matrices with entries in $F$, and let $\mathcal{H}_N^{(\beta)}$ denote the subset of self-adjoint matrices (i.e., real symmetric if $\beta = 1$ and Hermitian if $\beta = 2$). One can always consider the sets $\mathrm{Mat}_N(F)$ and $\mathcal{H}_N^{(\beta)}$, $\beta = 1, 2$, as submanifolds of an appropriate Euclidean space, and equip them with the induced topology and (Borel) sigma-field.

Recall that a probability space is a triple $(\Omega, \mathcal{F}, P)$ so that $\mathcal{F}$ is a sigma-algebra of subsets of $\Omega$ and $P$ is a probability measure on $(\Omega, \mathcal{F})$. In that setting, a random matrix $X_N$ is a measurable map from $(\Omega, \mathcal{F})$ to $\mathrm{Mat}_N(F)$.

Our main interest is in the eigenvalues of random matrices. Recall that the eigenvalues of a matrix $H \in \mathrm{Mat}_N(F)$ are the roots of the characteristic polynomial $P_N(z) = \det(zI_N - H)$, with $I_N$ the identity matrix.
Therefore, on the (open) set where the eigenvalues are all simple, they are smooth functions of the entries of $X_N$ (a more complete discussion can be found in Section 4.1).

We will be mostly concerned in this book with self-adjoint matrices $H \in \mathcal{H}_N^{(\beta)}$, $\beta = 1, 2$, in which case the eigenvalues are all real and can be ordered. Thus, for $H \in \mathcal{H}_N^{(\beta)}$, we let $\lambda_1(H) \le \cdots \le \lambda_N(H)$ be the eigenvalues of $H$. A consequence of the perturbation theory of normal matrices (see Lemma A.4) is that the eigenvalues $\{\lambda_i(H)\}$ are continuous functions of $H$ (this also follows from the Hoffman–Wielandt theorem, Theorem 2.1.19). In particular, if $X_N$ is a random matrix then the eigenvalues $\{\lambda_i(X_N)\}$ are random variables.

We present now a guided tour of the book. We begin by considering Wigner matrices in Chapter 2. These are symmetric (or Hermitian) matrices $X_N$ whose entries are independent and identically distributed, except for the symmetry constraints. For $x \in \mathbb{R}$, let $\delta_x$ denote the Dirac measure at $x$, that is, the unique probability measure satisfying $\int f \, d\delta_x = f(x)$ for all continuous functions on $\mathbb{R}$. Let $L_N = N^{-1}\sum_{i=1}^{N} \delta_{\lambda_i(X_N)}$ denote the empirical measure of the eigenvalues of $X_N$. Wigner's Theorem (Theorem 2.1.1) asserts that, under appropriate assumptions on the law of the entries, $L_N$ converges (with respect to the weak convergence of measures) towards a deterministic probability measure, the semicircle law. We present in Chapter 2 several proofs of Wigner's Theorem. The first, in Section 2.1, involves a combinatorial machinery that is also exploited to yield central limit theorems and estimates on the spectral radius of $X_N$. After first introducing in Section 2.3 some useful estimates on the deviation between the empirical measure and its mean, we define in Section 2.4 the Stieltjes transform of measures and use it to give another quick proof of Wigner's Theorem.

Having discussed techniques valid for entries distributed according to general laws, we turn attention to special situations involving additional symmetry. The simplest of these concerns the Gaussian ensembles, the GOE and GUE, so named because their law is invariant under conjugation by orthogonal (resp., unitary) matrices. The latter extra symmetry is crucial in deriving in Section 2.5 an explicit joint distribution for the eigenvalues (thus effectively reducing consideration from a problem involving of order $N^2$ random variables, namely the matrix entries, to one involving only $N$ variables). (The GSE, or Gaussian symplectic ensemble, also shares this property and is discussed briefly.)
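The convergence of $L_N$ just described lends itself to a quick numerical illustration. The sketch below is ours, not part of the text: it samples a Wigner matrix with Gaussian entries in pure Python and compares the empirical moments $\langle L_N, x^2\rangle = \mathrm{tr}(X_N^2)/N$ and $\langle L_N, x^4\rangle = \mathrm{tr}(X_N^4)/N$ with the first two even moments of the semicircle law, which equal $1$ and $2$.

```python
# Illustrative sketch (not from the text): empirical moments of a sampled
# Wigner matrix approach the semicircle moments m_2 = 1 and m_4 = 2.
# Pure Python; Gaussian entries are chosen for convenience.
import math
import random

def wigner_matrix(n, rng):
    # Symmetric n x n matrix, i.i.d. N(0,1) entries on and above the
    # diagonal, scaled by 1/sqrt(n). (The diagonal law only needs finite
    # moments, so taking it N(0,1) here is harmless.)
    s = 1.0 / math.sqrt(n)
    x = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            x[i][j] = x[j][i] = rng.gauss(0.0, 1.0) * s
    return x

def moment2(x):
    # tr(X^2)/n = (1/n) sum_{i,j} X(i,j)^2 since X is symmetric.
    n = len(x)
    return sum(x[i][j] ** 2 for i in range(n) for j in range(n)) / n

def moment4(x):
    # tr(X^4)/n = (1/n) sum_{i,j} (X^2)(i,j)^2 since X is symmetric.
    n = len(x)
    x2 = [[sum(x[i][k] * x[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return sum(x2[i][j] ** 2 for i in range(n) for j in range(n)) / n

rng = random.Random(0)
xn = wigner_matrix(120, rng)
m2v, m4v = moment2(xn), moment4(xn)
print(m2v, m4v)  # both should be close to 1 and 2, respectively
```

The agreement improves as $N$ grows; with numpy available one could instead histogram `numpy.linalg.eigvalsh(xn)` directly against the semicircle density.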
A large deviations principle for the empirical distribution, which leads to yet another proof of Wigner's Theorem, follows in Section 2.6.

The expression for the joint density of the eigenvalues in the Gaussian ensembles is the starting point for obtaining local information on the eigenvalues. This is the topic of Chapter 3. The bulk of the chapter deals with the GUE, because in that situation the eigenvalues form a determinantal process. This allows one to effectively represent the probability that no eigenvalues are present in a set as a Fredholm determinant, a notion that is particularly amenable to asymptotic analysis. Thus, after representing in Section 3.2 the joint density for the GUE in terms of a determinant involving appropriate orthogonal polynomials, the Hermite polynomials, we develop in Section 3.4 in an elementary way some aspects of the theory of Fredholm determinants. We then present in Section 3.5 the asymptotic analysis required in order to study the gap probability at 0, that is, the probability that no eigenvalue is present in an interval around the origin. Relevant tools, such as the Laplace method, are developed along the way. Section 3.7 repeats this analysis for the edge of the spectrum, introducing along the way the method of steepest descent. The link with integrable systems and the Painlevé equations is established in Sections 3.6 and 3.8.

As mentioned before, the eigenvalues of the GUE are an example of a determinantal process. The other Gaussian ensembles (GOE and GSE) do not fall into this class, but they do enjoy a structure where certain Pfaffians replace determinants. This leads to a considerably more involved analysis, the details of which are provided in Section 3.9.

Chapter 4 is a hodge-podge of results whose common feature is that they all require new tools. We begin in Section 4.1 with a re-derivation of the joint law of the eigenvalues of the Gaussian ensemble, in a geometric framework based on Lie theory. We use this framework to derive the expressions for the joint distribution of eigenvalues of Wishart matrices, of random matrices from the various unitary groups and of matrices related to random projectors. Section 4.2 studies in some depth determinantal processes, including their construction, associated central limit theorems, convergence and ergodic properties. Section 4.3 studies what happens when, in the GUE (or GOE), the Gaussian entries are replaced by Brownian motions. The powerful tools of stochastic analysis can then be brought to bear and lead to functional laws of large numbers, central limit theorems and large deviations. Section 4.4 consists of an in-depth treatment of concentration techniques and their application to random matrices; it is a generalization of the discussion in the short Section 2.3. Finally, in Section 4.5, we study a family of tridiagonal matrices, parametrized by a parameter $\beta$, whose distribution of eigenvalues coincides with that of members of the Gaussian ensembles for $\beta = 1, 2, 4$. The study of the maximal eigenvalue for this family is linked to the spectrum of an appropriate random Schrödinger operator.

Chapter 5 is devoted to free probability theory, a probability theory for certain noncommutative variables, equipped with a notion of independence called free independence. Invented in the early 1990s, free probability theory has become a versatile tool for analyzing the laws of noncommutative polynomials in several random matrices, and of the limits of the empirical measure of eigenvalues of such polynomials. We develop the necessary preliminaries and definitions in Section 5.2, introduce free independence in Section 5.3, and discuss the link with random matrices in Section 5.4. We conclude the chapter with Section 5.5, in which we study the convergence of the spectral radius of noncommutative polynomials of random matrices.

Each chapter ends with bibliographical notes. These are not meant to be comprehensive, but rather to guide the reader through the enormous literature and to give some hint of recent developments. Although we have tried to represent accurately the historical development of the subject, we have necessarily omitted important references, misrepresented facts, or plainly erred. Our apologies to those authors whose work we have thus unintentionally slighted.

Of course, we have barely scratched the surface of the subject of random matrices. We mention now the most glaring omissions, together with references to some recent books that cover these topics. We have not discussed the theory of the Riemann–Hilbert problem and its relation to integrable systems, Painlevé equations, asymptotics of orthogonal polynomials and random matrices. The interested reader is referred to the books [FoIKN06], [Dei99] and [DeG09] for an in-depth treatment. We do not discuss the relation between asymptotics of random matrices and combinatorial problems – a good summary of these appears in [BaDS09]. We barely discuss applications of random matrices, and in particular do not review the recent increase in applications to statistics or communication theory – for a nice introduction to the latter we refer to [TuV04]. We have presented only a partial discussion of ensembles of matrices that possess an explicit joint distribution of eigenvalues. For a more complete discussion, including also the case of non-Hermitian matrices that are not unitary, we refer the reader to [For05]. Finally, we have not discussed the link between random matrices and number theory; the interested reader should consult [KaS99] for a taste of that link. We further refer to the bibliographical notes for additional reading, less glaring omissions and references.

2 Real and complex Wigner matrices

2.1 Real Wigner matrices: traces, moments and combinatorics

We introduce in this section a basic model of random matrices. Nowhere do we attempt to provide the weakest assumptions or sharpest results available. We point out in the bibliographical notes (Section 2.7) some places where the interested reader can find finer results.

Start with two independent families of independent and identically distributed (i.i.d.) zero mean, real-valued random variables $\{Z_{i,j}\}_{1 \le i < j}$ and $\{Y_i\}_{1 \le i}$, such that $EZ_{1,2}^2 = 1$ and, for all integers $k \ge 1$,

$$ r_k := \max\big( E|Z_{1,2}|^k, \; E|Y_1|^k \big) < \infty. \tag{2.1.1} $$

Consider the (symmetric) $N \times N$ matrix $X_N$ with entries

$$ X_N(j,i) = X_N(i,j) = \begin{cases} Z_{i,j}/\sqrt{N}, & \text{if } i < j, \\ Y_i/\sqrt{N}, & \text{if } i = j. \end{cases} \tag{2.1.2} $$

We call such a matrix a Wigner matrix, and if the random variables $Z_{i,j}$ and $Y_i$ are Gaussian, we use the term Gaussian Wigner matrix. The case of Gaussian Wigner matrices in which $EY_1^2 = 2$ is of particular importance, and for reasons that will become clearer in Chapter 3, such matrices (rescaled by $\sqrt{N}$) are referred to as Gaussian orthogonal ensemble (GOE) matrices.

Let $\lambda_i^N$ denote the (real) eigenvalues of $X_N$, with $\lambda_1^N \le \lambda_2^N \le \cdots \le \lambda_N^N$, and define the empirical distribution of the eigenvalues as the (random) probability measure on $\mathbb{R}$ defined by

$$ L_N = \frac{1}{N}\sum_{i=1}^{N} \delta_{\lambda_i^N}. $$

Define the semicircle distribution (or law) as the probability distribution $\sigma(x)\,dx$ on $\mathbb{R}$ with density

$$ \sigma(x) = \frac{1}{2\pi}\sqrt{4 - x^2}\,\mathbf{1}_{|x| \le 2}. \tag{2.1.3} $$

The following theorem, contained in [Wig55], can be considered the starting point of random matrix theory (RMT).

Theorem 2.1.1 (Wigner) For a Wigner matrix, the empirical measure $L_N$ converges weakly, in probability, to the semicircle distribution.

In greater detail, Theorem 2.1.1 asserts that for any $f \in C_b(\mathbb{R})$, and any $\varepsilon > 0$,

$$ \lim_{N \to \infty} P\big( |\langle L_N, f \rangle - \langle \sigma, f \rangle| > \varepsilon \big) = 0. $$

Remark 2.1.2 The assumption (2.1.1) that $r_k < \infty$ for all $k$ is not really needed. See Theorem 2.1.21 in Section 2.1.5.

We will see many proofs of Wigner's Theorem 2.1.1. In this section, we give a direct combinatorics-based proof, mimicking the original argument of Wigner. Before doing so, however, we need to discuss some properties of the semicircle distribution.

2.1.1 The semicircle distribution, Catalan numbers and Dyck paths

Define the moments $m_k := \langle \sigma, x^k \rangle$. Recall the Catalan numbers

$$ C_k = \binom{2k}{k}\frac{1}{k+1} = \frac{(2k)!}{(k+1)!\,k!}. $$

We now check that, for all integers $k \ge 1$,

$$ m_{2k} = C_k, \qquad m_{2k+1} = 0. \tag{2.1.4} $$

Indeed, $m_{2k+1} = 0$ by symmetry, while

$$ m_{2k} = \int_{-2}^{2} x^{2k}\sigma(x)\,dx = \frac{2 \cdot 2^{2k}}{\pi}\int_{-\pi/2}^{\pi/2} \sin^{2k}(\theta)\cos^2(\theta)\,d\theta = \frac{2 \cdot 2^{2k}}{\pi}\int_{-\pi/2}^{\pi/2} \sin^{2k}(\theta)\,d\theta - (2k+1)\,m_{2k} = \frac{2 \cdot 2^{2k}}{\pi(2k+2)}\int_{-\pi/2}^{\pi/2} \sin^{2k}(\theta)\,d\theta. $$

Hence,

$$ m_{2k} = \frac{4(2k-1)}{2k+2}\, m_{2k-2}, \tag{2.1.5} $$
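As an aside, recursion (2.1.5) can be sanity-checked numerically (our sketch, not part of the text; all helper names are ours): iterating it from $m_0 = 1$ should reproduce the Catalan numbers $C_k$, and both should agree with a direct numerical integration of $\int_{-2}^{2} x^{2k}\sigma(x)\,dx$.

```python
# Sanity check of recursion (2.1.5) (illustrative sketch): iterating
# m_{2k} = [4(2k-1)/(2k+2)] m_{2k-2} from m_0 = 1 reproduces the Catalan
# numbers C_k, and both match a midpoint-rule quadrature of the moments
# of the semicircle density sigma(x) = sqrt(4 - x^2) / (2 pi) on [-2, 2].
import math

def catalan(k):
    return math.factorial(2 * k) // (math.factorial(k + 1) * math.factorial(k))

def moment_by_recursion(k):
    m = 1.0  # m_0 = 1
    for j in range(1, k + 1):
        m *= 4.0 * (2 * j - 1) / (2 * j + 2)
    return m

def moment_by_quadrature(k, steps=100000):
    # midpoint rule for the 2k-th moment of the semicircle density
    h = 4.0 / steps
    total = 0.0
    for i in range(steps):
        x = -2.0 + (i + 0.5) * h
        total += x ** (2 * k) * math.sqrt(4.0 - x * x) / (2.0 * math.pi) * h
    return total

for k in range(6):
    print(k, catalan(k), moment_by_recursion(k), round(moment_by_quadrature(k), 4))
```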

from which, together with $m_0 = 1$, one concludes (2.1.4).

The Catalan numbers possess many combinatorial interpretations. To introduce a first one, say that an integer-valued sequence $\{S_n\}_{0 \le n \le \ell}$ is a Bernoulli walk of length $\ell$ if $S_0 = 0$ and $|S_{t+1} - S_t| = 1$ for $t \le \ell - 1$. Of particular relevance here is the fact that $C_k$ counts the number of Dyck paths of length $2k$, that is, the number of nonnegative Bernoulli walks of length $2k$ that terminate at $0$. Indeed, let $\beta_k$ denote the number of such paths. A classical exercise in combinatorics is

Lemma 2.1.3 $\beta_k = C_k \le 4^k$. Further, the generating function $\hat\beta(z) := 1 + \sum_{k \ge 1} z^k \beta_k$ satisfies, for $|z| < 1/4$,

$$ \hat\beta(z) = \frac{1 - \sqrt{1 - 4z}}{2z}. \tag{2.1.6} $$

Proof of Lemma 2.1.3 Let $B_k$ denote the number of Bernoulli walks $\{S_n\}$ of length $2k$ that satisfy $S_{2k} = 0$, and let $\bar B_k$ denote the number of Bernoulli walks $\{S_n\}$ of length $2k$ that satisfy $S_{2k} = 0$ and $S_t < 0$ for some $t < 2k$. Then, $\beta_k = B_k - \bar B_k$. By reflection at the first hitting of $-1$, one sees that $\bar B_k$ equals the number of Bernoulli walks $\{S_n\}$ of length $2k$ that satisfy $S_{2k} = -2$. Hence,

$$ \beta_k = B_k - \bar B_k = \binom{2k}{k} - \binom{2k}{k+1} = C_k. $$

Turning to the evaluation of $\hat\beta(z)$, considering the first return time to $0$ of the Bernoulli walk $\{S_n\}$ gives the relation

$$ \beta_k = \sum_{j=1}^{k} \beta_{k-j}\,\beta_{j-1}, \qquad k \ge 1, \tag{2.1.7} $$

with the convention that $\beta_0 = 1$. Because the number of Bernoulli walks of length $2k$ is bounded by $4^k$, one has that $\beta_k \le 4^k$, and hence the function $\hat\beta(z)$ is well defined and analytic for $|z| < 1/4$. But, substituting (2.1.7),

$$ \hat\beta(z) = 1 + \sum_{k \ge 1} z^k \sum_{j=1}^{k} \beta_{k-j}\beta_{j-1} = 1 + z\sum_{k \ge 0} z^{k} \sum_{j=0}^{k} \beta_{k-j}\beta_{j}, $$

while

$$ \hat\beta(z)^2 = \sum_{k,k' \ge 0} z^{k+k'}\beta_k\beta_{k'} = \sum_{q \ge 0} z^q \sum_{\ell=0}^{q} \beta_{q-\ell}\beta_\ell. $$

Combining the last two equations, one sees that

$$ \hat\beta(z) = 1 + z\hat\beta(z)^2, $$
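Lemma 2.1.3 also lends itself to a brute-force check before the proof concludes (our sketch, not part of the text): enumerate all Bernoulli walks of length $2k$, count the Dyck paths among them, and compare with $C_k$ and with the first-return decomposition (2.1.7).

```python
# Brute-force check of Lemma 2.1.3 (illustrative sketch): count Dyck paths
# among all Bernoulli walks of length 2k, compare with the Catalan numbers,
# and verify the first-return recursion (2.1.7).
import itertools
import math

def catalan(k):
    return math.factorial(2 * k) // (math.factorial(k + 1) * math.factorial(k))

def dyck_count(k):
    # number of nonnegative Bernoulli walks of length 2k ending at 0
    count = 0
    for steps in itertools.product((1, -1), repeat=2 * k):
        s, nonneg = 0, True
        for step in steps:
            s += step
            if s < 0:
                nonneg = False
                break
        if nonneg and s == 0:
            count += 1
    return count

beta = [dyck_count(k) for k in range(8)]   # beta_0 = 1 by convention
print(beta)
# recursion (2.1.7): beta_k = sum_{j=1}^{k} beta_{k-j} beta_{j-1}
recursion_ok = all(
    beta[k] == sum(beta[k - j] * beta[j - 1] for j in range(1, k + 1))
    for k in range(1, 8))
print(recursion_ok)
```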

from which (2.1.6) follows (using that $\hat\beta(0) = 1$ to choose the correct branch of the square root). $\square$

We note in passing that, expanding (2.1.6) in power series in $z$ in a neighborhood of zero, one gets (for $|z| < 1/4$)

$$ \hat\beta(z) = \frac{1}{2z} \cdot 2\sum_{k \ge 1} \frac{(2k-2)!}{k!\,(k-1)!}\,z^{k} = \sum_{k \ge 0} \frac{(2k)!}{k!\,(k+1)!}\,z^k = \sum_{k \ge 0} C_k z^k, $$

which provides an alternative proof of the fact that $\beta_k = C_k$.

Another useful interpretation of the Catalan numbers is that $C_k$ counts the number of rooted planar trees with $k$ edges. (A rooted planar tree is a planar graph with no cycles, with one distinguished vertex, and with a choice of ordering at each vertex; the ordering defines a way to "explore" the tree, starting at the root.) It is not hard to check that the Dyck paths of length $2k$ are in bijection with such rooted planar trees. See the proof of Lemma 2.1.6 in Section 2.1.3 for a formal construction of this bijection.

We note in closing that a third interpretation of the Catalan numbers, particularly useful in the context of Chapter 5, is that they count the non-crossing partitions of the ordered set $K_k := \{1, 2, \ldots, k\}$.

Definition 2.1.4 A partition of the set $K_k := \{1, 2, \ldots, k\}$ is called crossing if there exists a quadruple $(a, b, c, d)$ with $1 \le a < b < c < d \le k$ such that $a, c$ belong to one part while $b, d$ belong to another part. A partition which is not crossing is a non-crossing partition.

Non-crossing partitions form a lattice with respect to refinement. A look at Figure 2.1.1 should explain the terminology "non-crossing": one puts the points $1, \ldots, k$ on the circle, and connects each point with the next member of its part (in cyclic order) by an internal path. Then, the partition is non-crossing if this can be achieved without arcs crossing each other. It is not hard to check that $C_k$ is indeed the number $\gamma_k$ of non-crossing partitions of $K_k$.
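This count can also be verified by exhaustive enumeration for small $k$, applying Definition 2.1.4 literally (our sketch, not part of the text; the helper names are ours).

```python
# Exhaustive check (illustrative sketch) that the number of non-crossing
# partitions of K_k = {1, ..., k} equals the Catalan number C_k, testing
# the crossing condition of Definition 2.1.4 directly.
import itertools
import math

def catalan(k):
    return math.factorial(2 * k) // (math.factorial(k + 1) * math.factorial(k))

def partitions(elems):
    # yield every set partition of the list elems (blocks stay sorted,
    # since each recursion step prepends the smallest remaining element)
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in partitions(rest):
        for i in range(len(part)):          # put `first` in an existing block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part              # or in a new block of its own

def is_crossing(part):
    # Definition 2.1.4: some a < b < c < d with a, c in one block and
    # b, d in another (the quadruple may start in either block).
    for p, q in itertools.combinations(part, 2):
        for a, c in itertools.combinations(p, 2):
            for b, d in itertools.combinations(q, 2):
                if a < b < c < d or b < a < d < c:
                    return True
    return False

ncp = [sum(1 for p in partitions(list(range(1, k + 1))) if not is_crossing(p))
       for k in range(1, 7)]
print(ncp)  # should match the Catalan numbers C_1, ..., C_6
```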
To see that, let $\pi$ be a non-crossing partition of $K_k$ and let $j$ denote the largest element connected to $1$ (with $j = 1$ if the part containing $1$ is the set $\{1\}$). Then, because $\pi$ is non-crossing, it induces non-crossing partitions on the sets $\{1, \ldots, j-1\}$ and $\{j+1, \ldots, k\}$. Therefore, $\gamma_k = \sum_{j=1}^{k} \gamma_{k-j}\gamma_{j-1}$. With $\gamma_1 = 1$, and comparing with (2.1.7), one sees that $\beta_k = \gamma_k$.

[Fig. 2.1.1. Non-crossing (left, (1, 4), (2, 3), (5, 6)) and crossing (right, (1, 5), (2, 3), (4, 6)) partitions of the set $K_6$.]

Exercise 2.1.5 Prove that for $z \in \mathbb{C}$ such that $z \notin [-2, 2]$, the Stieltjes transform $S(z)$ of the semicircle law (see Definition 2.4.1) equals

$$ S(z) = \int \frac{\sigma(d\lambda)}{\lambda - z} = \frac{-z + \sqrt{z^2 - 4}}{2}. $$

Hint: Either use the residue theorem, or relate $S(z)$ to the generating function $\hat\beta(z)$, see Remark 2.4.2.

2.1.2 Proof #1 of Wigner's Theorem 2.1.1

Define the probability distribution $\bar L_N = EL_N$ by the relation $\langle \bar L_N, f \rangle = E\langle L_N, f \rangle$ for all $f \in C_b$, and set $m_k^N := \langle \bar L_N, x^k \rangle$. Theorem 2.1.1 follows from the following two lemmas.

Lemma 2.1.6 For every $k \in \mathbb{N}$,
$$ \lim_{N \to \infty} m_k^N = m_k. $$
(See (2.1.4) for the definition of $m_k$.)

Lemma 2.1.7 For every $k \in \mathbb{N}$ and $\varepsilon > 0$,
$$ \lim_{N \to \infty} P\big( \big|\langle L_N, x^k \rangle - \langle \bar L_N, x^k \rangle\big| > \varepsilon \big) = 0. $$

Indeed, assume that Lemmas 2.1.6 and 2.1.7 have been proved. To conclude the proof of Theorem 2.1.1, one needs to check that for any bounded continuous function $f$,
$$ \lim_{N \to \infty} \langle L_N, f \rangle = \langle \sigma, f \rangle, \quad \text{in probability}. \tag{2.1.8} $$
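Lemma 2.1.7 is a concentration statement, and it too can be illustrated numerically before its proof (our sketch, not part of the text; Gaussian entries and moderate sizes for speed): the standard deviation of $\langle L_N, x^2 \rangle$ over repeated samples shrinks as $N$ grows.

```python
# Illustration of the concentration asserted by Lemma 2.1.7 (sketch,
# Gaussian entries): the standard deviation of <L_N, x^2> = tr(X_N^2)/N
# over repeated samples shrinks as N grows (here roughly like 1/N).
import math
import random

def sample_moment2(n, rng):
    # <L_N, x^2> for one Gaussian Wigner matrix, via
    # tr(X^2)/n = (1/n) [sum_i X(i,i)^2 + 2 sum_{i<j} X(i,j)^2]
    s = 1.0 / math.sqrt(n)
    total = 0.0
    for i in range(n):
        for j in range(i, n):
            g = rng.gauss(0.0, 1.0) * s
            total += g * g if i == j else 2.0 * g * g
    return total / n

def empirical_std(n, reps, rng):
    vals = [sample_moment2(n, rng) for _ in range(reps)]
    mean = sum(vals) / reps
    return math.sqrt(sum((v - mean) ** 2 for v in vals) / reps)

rng = random.Random(1)
s20, s80 = empirical_std(20, 200, rng), empirical_std(80, 200, rng)
print(s20, s80)  # the second value should be markedly smaller
```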

Toward this end, note first that an application of the Chebyshev inequality yields
$$ P\big( \big|\langle L_N, x^k \mathbf{1}_{|x| > B} \rangle\big| > \varepsilon \big) \le \frac{1}{\varepsilon}\, E\langle L_N, $$
