Statistical Thermodynamics

Professor Dmitry Garanin
Statistical physics
May 17, 2021

I. PREFACE

Statistical physics considers systems of a large number of entities (particles) such as atoms, molecules, spins, etc. For these systems it is impossible, and even does not make sense, to study the full microscopic dynamics. The only relevant information is, say, how many atoms have a particular energy; from it one can calculate the observable thermodynamic values. That is, one has to know the distribution function of the particles over energies, which defines the macroscopic properties. This gives the name statistical physics and defines the scope of this subject.

The approach outlined above can be used both at and off equilibrium. The branch of physics studying nonequilibrium situations is called physical kinetics. In this course, we study only statistical physics at equilibrium. It turns out that at equilibrium the energy distribution function has an explicit general form and the only remaining problem is to calculate the observables. The term statistical mechanics means the same as statistical physics. One can call it statistical thermodynamics as well.

The formalism of statistical thermodynamics can be developed for both classical and quantum systems. The resulting energy distribution and the calculation of observables are simpler in the classical case. However, the very formulation of the method is more transparent within the quantum-mechanical formalism. In addition, the absolute value of the entropy, including its correct value at T = 0, can only be obtained in the quantum case. To avoid double work, we will consider only quantum statistical thermodynamics in this course, limiting ourselves to systems without interaction. The more general quantum results recover their classical forms in the classical limit.

II. MICROSTATES AND MACROSTATES

From quantum mechanics it follows that the states of the system do not change continuously (as in classical physics) but are quantized. There is a huge number of discrete quantum states i, with the corresponding energy values ε_i being the main parameter characterizing these states. In the absence of interaction, each particle has its own set of quantum states which it can occupy. For identical particles these sets of states are identical. One can think of boxes i into which particles are placed, N_i particles in the ith box. The particles can be distributed over the boxes in a number of different ways, corresponding to different microstates, in which the state i of each particle is specified. The information contained in the microstates is excessive, and the only meaningful information is provided by the numbers N_i that define the distribution of particles over their quantum states. These numbers N_i specify what in statistical physics is called a macrostate. If these numbers are known, the energy and other quantities of the system can be found. It should be noted that the statistical macrostate still contains more information than the macroscopic physical quantities that follow from it, as a distribution contains more information than an average over it.

Each macrostate k, specified by the numbers N_i, can be realized by a number w_k of microstates, the so-called thermodynamic probability. The latter is typically a large number, unlike the usual probability that varies between 0 and 1. Redistributing the particles over the states i while keeping the same values of N_i generates different microstates within the same macrostate.
The basic assumption of statistical mechanics is the equidistribution over microstates: each microstate within a macrostate is equally probable. Macrostates having a larger w_k are more likely to be realized. As we will see, for large systems the thermodynamic probabilities of different macrostates vary in a wide range, and there is one state with the largest value of w that wins over all other macrostates. If the number of quantum states i is finite, the total number of microstates can be written as

\Omega = \sum_k w_k,    (1)

the sum rule for thermodynamic probabilities.
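As a concrete illustration of Eq. (1), the following short Python sketch (not part of the original notes; the system size N is an arbitrary small choice) enumerates all microstates of N two-state particles, groups them into macrostates labeled by N_1, and checks that the thermodynamic probabilities w_k sum to the total number of microstates Ω = 2^N.

```python
from itertools import product
from collections import Counter

N = 6  # small number of two-state particles, chosen only for illustration

# Each microstate is a tuple of single-particle states, e.g. (1, 2, 1, 1, 2, 1).
microstates = list(product((1, 2), repeat=N))

# The macrostate is fully specified by N1, the number of particles in state 1.
w = Counter(state.count(1) for state in microstates)   # w[N1] = number of microstates

Omega = sum(w.values())
print("w_k by macrostate N1:", dict(w))
print("sum of w_k =", Omega, "  2^N =", 2**N)          # Eq. (1): Omega = sum_k w_k
```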

For an isolated system the number of particles N and the energy U are conserved, thus the numbers N_i satisfy the constraints

\sum_i N_i = N,    (2)

\sum_i N_i \varepsilon_i = U,    (3)

which limit the variety of the allowed macrostates k.

III. TWO-STATE PARTICLES (COIN TOSSING)

A tossed coin can land in two positions: head up or tail up. Considering the coin as a particle, one can say that this particle has two "quantum" states, 1 corresponding to the head and 2 corresponding to the tail. If N coins are tossed, this can be considered as a system of N particles with two quantum states each. The microstates of the system are specified by the states occupied by each coin. As each coin has 2 states, there are in total

\Omega = 2^N    (4)

microstates. The macrostates of this system are defined by the numbers of particles in each state, N_1 and N_2. These two numbers satisfy the constraint condition (2), i.e., N_1 + N_2 = N. Thus one can take, say, N_1 as the number k labeling macrostates. The number of microstates in one macrostate (that is, the number of different microstates that belong to the same macrostate) is given by the binomial expression

w_{N_1} = \frac{N!}{N_1!\,(N-N_1)!}.    (5)

This formula can be derived as follows. We have to pick N_1 particles to be in state 1; all others will be in state 2. How many ways are there to do this? The first particle can be picked in N ways, the second one in N − 1 ways (since we can choose from only N − 1 particles), the third in N − 2 different ways, etc. Thus the number of different ways to pick the particles is

N(N-1)(N-2)\cdots(N-N_1+1) = \frac{N!}{(N-N_1)!},    (6)

where the factorial is defined by

N! = N(N-1)\cdots 2\cdot 1, \qquad 0! = 1.    (7)

The recurrent definition of the factorial is

N! = N(N-1)!, \qquad 0! = 1.    (8)

The expression in Eq. (6) is not yet the thermodynamic probability w_{N_1} because it contains multiple counting of the same microstates. The realizations in which the N_1 state-1 particles are picked in different orders have been counted as different microstates, whereas they are the same microstate. To correct for the multiple counting, one has to divide by the number of permutations N_1! of the N_1 particles, which yields Eq. (5). One can check that condition (1) is satisfied:

\sum_{N_1=0}^{N} w_{N_1} = \sum_{N_1=0}^{N} \frac{N!}{N_1!\,(N-N_1)!} = 2^N.    (9)

The thermodynamic probability w_{N_1} has a maximum at N_1 = N/2, half of the coins head and half of the coins tail. This macrostate is the most probable state. Indeed, as for an individual coin the probabilities to land head up and tail up are both equal to 0.5, this is what we expect. For large N the maximum of w_{N_1} over N_1 becomes sharp.

To prove that N_1 = N/2 is the maximum of w_{N_1}, one can rewrite Eq. (5) in terms of the new variable p = N_1 − N/2 as

w_{N_1} = \frac{N!}{(N/2+p)!\,(N/2-p)!}.    (10)
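Eqs. (5) and (9) are easy to verify numerically. The sketch below is an illustration only (the value of N is arbitrary): it computes the binomial coefficients w_{N_1} directly, compares their sum with 2^N, and confirms that the maximum lies at N_1 = N/2.

```python
from math import comb

N = 1000  # number of coins (arbitrary example value)

w = [comb(N, N1) for N1 in range(N + 1)]        # Eq. (5): w_{N1} = N!/(N1!(N-N1)!)

print(sum(w) == 2**N)                           # Eq. (9): sum over N1 equals 2^N -> True
print(max(range(N + 1), key=lambda N1: w[N1]))  # most probable macrostate: N1 = N/2 = 500
```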

FIG. 1: The binomial distribution for an ensemble of N two-state systems becomes narrow and peaked at p = N_1 − N/2 = 0.

One can see that w_{N_1} is symmetric around N_1 = N/2, i.e., p = 0. Using Eq. (8), one obtains

\frac{w_{N/2\pm 1}}{w_{N/2}} = \frac{(N/2)!\,(N/2)!}{(N/2+1)!\,(N/2-1)!} = \frac{N/2}{N/2+1} < 1,    (11)

so N_1 = N/2 is indeed the maximum of w_{N_1}. The binomial distribution is shown in Fig. 1 for three different values of N. As the argument, the variable p/p_max = (N_1 − N/2)/(N/2) is used, so that all the data can be put on the same plot. One can see that in the limit of large N the binomial distribution becomes narrow and centered at p = 0, that is, N_1 = N/2. Practically this means that if the coin is tossed many times, significant deviations from the 50:50 ratio between the numbers of heads and tails will be extremely rare.

IV. STIRLING FORMULA AND THERMODYNAMIC PROBABILITY AT LARGE N

Analysis of expressions with large factorials is simplified by the Stirling formula

N! \cong \sqrt{2\pi N}\left(\frac{N}{e}\right)^N.    (12)

In many important cases the prefactor \sqrt{2\pi N} is irrelevant, as we will see below. With the Stirling formula substituted, Eq. (10) becomes

w_{N_1} \cong \frac{\sqrt{2\pi N}\,(N/e)^N}{\sqrt{2\pi(N/2+p)}\,[(N/2+p)/e]^{N/2+p}\,\sqrt{2\pi(N/2-p)}\,[(N/2-p)/e]^{N/2-p}}
        = \sqrt{\frac{2}{\pi N}}\,\frac{1}{\sqrt{1-(2p/N)^2}}\,\frac{N^N}{(N/2+p)^{N/2+p}(N/2-p)^{N/2-p}}
        \cong w_{N/2}\left(1+\frac{2p}{N}\right)^{-(N/2+p)}\left(1-\frac{2p}{N}\right)^{-(N/2-p)},    (13)

where

w_{N/2} \cong \sqrt{\frac{2}{\pi N}}\,2^N    (14)

is the maximal value of the thermodynamic probability.
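The narrowing seen in Fig. 1 can also be checked numerically. The following sketch (an illustration, not from the notes; the values of N are arbitrary) computes the normalized distribution w_{N_1}/2^N for several N and prints its relative width, i.e. the standard deviation of N_1/N, which shrinks as 1/(2√N).

```python
from math import comb, sqrt

for N in (10, 100, 1000):
    # Normalized binomial distribution P(N1) = w_{N1} / 2^N
    P = [comb(N, N1) / 2**N for N1 in range(N + 1)]
    mean = sum(N1 * p for N1, p in enumerate(P)) / N              # equals 1/2
    var = sum((N1 / N - mean) ** 2 * p for N1, p in enumerate(P))
    print(f"N = {N:5d}   relative width = {sqrt(var):.5f}   1/(2*sqrt(N)) = {0.5 / sqrt(N):.5f}")
```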

Eq. (13) can be expanded for p ≪ N. Since p enters both the bases and the exponents, one has to be careful and expand the logarithm of w_{N_1} rather than w_{N_1} itself. The square-root term in Eq. (13) can be discarded as it gives a negligible contribution of order p²/N². One obtains

\ln w_{N_1} \cong \ln w_{N/2} - \left(\frac{N}{2}+p\right)\ln\left(1+\frac{2p}{N}\right) - \left(\frac{N}{2}-p\right)\ln\left(1-\frac{2p}{N}\right)
            \cong \ln w_{N/2} - \left(\frac{N}{2}+p\right)\left[\frac{2p}{N}-\frac{1}{2}\left(\frac{2p}{N}\right)^2\right] - \left(\frac{N}{2}-p\right)\left[-\frac{2p}{N}-\frac{1}{2}\left(\frac{2p}{N}\right)^2\right]
            = \ln w_{N/2} - \frac{2p^2}{N}    (15)

and thus

w_{N_1} \cong w_{N/2}\exp\left(-\frac{2p^2}{N}\right).    (16)

One can see that w_{N_1} becomes very small if |p| \gtrsim \sqrt{N}, which for large N does not violate the applicability condition p ≪ N. That is, w_{N_1} is small in the main part of the interval 0 ≤ N_1 ≤ N and is sharply peaked near N_1 = N/2.

V. MANY-STATE PARTICLES

The results obtained in Sec. III for two-state particles can be generalized to n-state particles. We are looking for the number of ways to distribute N particles over n boxes so that there are N_i particles in the ith box. That is, we look for the number of microstates in the macrostate described by the numbers N_i. The result is given by

w = \frac{N!}{N_1!\,N_2!\cdots N_n!} = \frac{N!}{\prod_{i=1}^{n} N_i!}.    (17)

This formula can be obtained by using Eq. (5) successively. The number of ways to put N_1 particles in box 1 and the other N − N_1 in the remaining boxes is given by Eq. (5). Then the number of ways to put N_2 particles in box 2 is given by a similar formula with N → N − N_1 (there are only N − N_1 particles left after N_1 particles have been put in box 1) and N_1 → N_2. These numbers of ways should be multiplied. Then one considers box 3, etc., until the last box n. The resulting number of microstates is

w = \frac{N!}{N_1!\,(N-N_1)!}\,\frac{(N-N_1)!}{N_2!\,(N-N_1-N_2)!}\,\frac{(N-N_1-N_2)!}{N_3!\,(N-N_1-N_2-N_3)!}\cdots\frac{(N_{n-2}+N_{n-1}+N_n)!}{N_{n-2}!\,(N_{n-1}+N_n)!}\,\frac{(N_{n-1}+N_n)!}{N_{n-1}!\,N_n!}\,\frac{N_n!}{N_n!\,0!}.    (18)

In this formula, all numerators except for the first one and all second factors in the denominators cancel each other, so that Eq. (17) follows.

VI. THERMODYNAMIC PROBABILITY AND ENTROPY

We have seen that different macrostates k can be realized by largely varying numbers of microstates w_k. For large systems, N ≫ 1, the difference between different thermodynamic probabilities w_k is tremendous, and there is a sharp maximum of w_k at some value k = k_max. The main postulate of statistical physics is that in measurements on large systems, only the most probable macrostate, satisfying the constraints of Eqs. (2) and (3), makes a contribution. For instance, a macrostate of an ideal gas with all molecules in one half of the container is much less probable than the macrostate with the molecules equally distributed over the whole container. (Similarly, the state with all coins landed head up is much less probable than the state with half of the coins head up and the other half tail up.) For this reason, if the initial state is all molecules in one half of the container, then in the course of evolution the system will come to the most probable state with the molecules equally distributed over the whole container, and will stay in this state forever.
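The Gaussian approximation (16) can be compared directly with the exact binomial values. The sketch below is illustrative only (N and the chosen values of p are arbitrary); it prints the ratio of the exact w_{N_1} to the approximation near the peak.

```python
from math import comb, exp, pi, sqrt

N = 1000
w_max = sqrt(2 / (pi * N)) * 2**N            # Eq. (14): Stirling-approximated peak value

for p in (0, 5, 10, 20, 40):                 # p = N1 - N/2
    N1 = N // 2 + p
    exact = comb(N, N1)
    approx = w_max * exp(-2 * p**2 / N)      # Eq. (16)
    print(f"p = {p:3d}   exact/approx = {exact / approx:.4f}")   # ratios close to 1
```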

We have seen in thermodynamics that an isolated system, initially in a non-equilibrium state, evolves to the equilibrium state characterized by the maximal entropy. In this way one comes to the idea that the entropy S and the thermodynamic probability w should be related, one being a monotonic function of the other. The form of this function can be found if one notices that entropy is additive while the thermodynamic probability is multiplicative. If a system consists of two subsystems that weakly interact with each other (which is almost always the case, as the inter-subsystem interaction is limited to the small region near the interface between them), then S = S_1 + S_2 and w = w_1 w_2. If one chooses (L. Boltzmann)

S = k_B \ln w,    (19)

then S = S_1 + S_2 and w = w_1 w_2 are in accord, since ln(w_1 w_2) = ln w_1 + ln w_2. In Eq. (19) k_B is the Boltzmann constant,

k_B = 1.38\times 10^{-23}\ \mathrm{J\,K^{-1}}.    (20)

It will be shown that the statistically defined entropy above coincides with the thermodynamic entropy at equilibrium. On the other hand, the statistical entropy is well defined for nonequilibrium states as well, whereas the thermodynamic entropy is usually undefined off equilibrium.

VII. BOLTZMANN DISTRIBUTION AND CONNECTION WITH THERMODYNAMICS

In this section we obtain the distribution of particles over energy levels ε_i as the most probable macrostate by maximizing its thermodynamic probability w. We label quantum states by the index i and use Eq. (17). The task is to find the maximum of w with respect to all N_i that satisfy the constraints (2) and (3). Practically it is more convenient to maximize ln w than w itself. Using the method of Lagrange multipliers, one searches for the maximum of the target function

\Phi(N_1, N_2, \ldots, N_n) = \ln w + \alpha\sum_i N_i - \beta\sum_i \varepsilon_i N_i.    (21)

Here α and β are Lagrange multipliers with (arbitrary) signs chosen anticipating the final result. The maximum satisfies

\frac{\partial\Phi}{\partial N_i} = 0, \qquad i = 1, 2, \ldots    (22)

As we are interested in the behavior of macroscopic systems with N_i ≫ 1, the factorials can be simplified with the help of Eq. (12), which takes the form

\ln N! \cong N\ln N - N + \ln\sqrt{2\pi N}.    (23)

Neglecting the relatively small last term in this expression, one obtains

\Phi \cong \ln N! - \sum_j N_j\ln N_j + \sum_j N_j + \alpha\sum_j N_j - \beta\sum_j \varepsilon_j N_j.    (24)

The first term here is a constant and does not contribute to the derivatives ∂Φ/∂N_i. In the latter, the only contribution comes from the terms with j = i in the above expression. One obtains the equations

\frac{\partial\Phi}{\partial N_i} = -\ln N_i + \alpha - \beta\varepsilon_i = 0    (25)

that yield

N_i = e^{\alpha - \beta\varepsilon_i},    (26)

the Boltzmann distribution with yet undefined Lagrange multipliers α and β. The latter can be found from Eqs. (2) and (3) in terms of N and U. Summing Eq. (26) over i and using the partition function Z defined in Eq. (28) below, one obtains

N = e^{\alpha} Z, \qquad \alpha = \ln(N/Z).    (27)
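To see that the Boltzmann form (26) indeed maximizes ln w under the constraints (2) and (3), one can perturb it while conserving N and U and watch ln w decrease. The sketch below is an illustration with arbitrarily chosen level energies and parameters, not part of the notes: for three equidistant levels the perturbation δN = (+d, −2d, +d) conserves both constraints, and ln w is evaluated in the Stirling form ln w ≈ N ln N − Σ_i N_i ln N_i used in Eqs. (24) and (32).

```python
import numpy as np

eps = np.array([0.0, 1.0, 2.0])         # three equidistant levels (arbitrary energy units)
N, beta = 1000.0, 1.3                   # assumed particle number and inverse temperature

Z = np.exp(-beta * eps).sum()
N_boltz = N * np.exp(-beta * eps) / Z   # Eq. (29): Boltzmann occupation numbers

def ln_w(Ni):
    """Stirling form of ln w: N ln N - sum_i N_i ln N_i."""
    return N * np.log(N) - np.sum(Ni * np.log(Ni))

# (+d, -2d, +d) conserves sum(Ni) and sum(Ni*eps_i) because the levels are equidistant
for d in (0.0, 1.0, 5.0, 20.0):
    Ni = N_boltz + np.array([d, -2 * d, d])
    print(f"d = {d:5.1f}   ln w = {ln_w(Ni):.6f}")   # maximal at d = 0
```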

Here

Z = \sum_i e^{-\beta\varepsilon_i}    (28)

is the so-called partition function (German Zustandssumme) that plays a major role in statistical physics. Then, eliminating α from Eq. (26) yields

N_i = \frac{N}{Z}\,e^{-\beta\varepsilon_i},    (29)

the Boltzmann distribution. After that, for the internal energy U one obtains

U = \sum_i \varepsilon_i N_i = \frac{N}{Z}\sum_i \varepsilon_i e^{-\beta\varepsilon_i} = -\frac{N}{Z}\frac{\partial Z}{\partial\beta}    (30)

or

U = -N\frac{\partial\ln Z}{\partial\beta}.    (31)

This formula implicitly defines the Lagrange multiplier β as a function of U.

The statistical entropy, Eq. (19), within the Stirling approximation becomes

S = k_B\ln w \cong k_B\left(N\ln N - \sum_i N_i\ln N_i\right).    (32)

Inserting here Eq. (26) and α from Eq. (27), one obtains

\frac{S}{k_B} = N\ln N - \sum_i N_i(\alpha - \beta\varepsilon_i) = N\ln N - \alpha N + \beta U = N\ln Z + \beta U.    (33)

The statistical entropy depends only on the parameter β. Its differential is given by

\frac{dS}{d\beta} = k_B\frac{d}{d\beta}\left(N\ln Z + \beta U\right) = k_B\left(N\frac{\partial\ln Z}{\partial\beta} + U + \beta\frac{dU}{d\beta}\right) = k_B\,\beta\,\frac{dU}{d\beta},    (34)

since N ∂ln Z/∂β = −U according to Eq. (31); thus dS = k_B β dU. Comparing this with the thermodynamic relation dU = T dS at constant volume, one identifies

\beta = \frac{1}{k_B T}.    (35)

Now, with the help of dT/dβ = −k_B T², one can represent the internal energy, Eq. (31), via the derivative with respect to T as

U = N k_B T^2\,\frac{\partial\ln Z}{\partial T}.    (36)

Eq. (33) becomes

S = N k_B\ln Z + \frac{U}{T}.    (37)

From here one obtains the statistical formula for the free energy

F = U - TS = -N k_B T\ln Z.    (38)

One can see that the partition function Z contains the complete information about the system's thermodynamics, since other quantities such as the pressure P follow from F. In particular, one can check that ∂F/∂T = −S.
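The chain Z → U, S, F of Eqs. (28), (30), (31), (37), (38) can be checked numerically for any discrete spectrum. The sketch below is illustrative (the three-level spectrum, N, and T are arbitrary assumed values): U is computed both from Eq. (30) and from a finite-difference derivative of ln Z with respect to β, and F = U − TS is compared with −Nk_BT ln Z.

```python
import numpy as np

kB = 1.380649e-23                           # Boltzmann constant, J/K
eps = np.array([0.0, 1.0, 2.5]) * 1e-21     # example single-particle spectrum in J (assumed)
N, T = 1.0e22, 300.0                        # assumed particle number and temperature
beta = 1.0 / (kB * T)

lnZ = lambda b: np.log(np.exp(-b * eps).sum())

# Eq. (30): U = (N/Z) * sum_i eps_i * exp(-beta*eps_i)
boltz = np.exp(-beta * eps)
U = N * (eps * boltz).sum() / boltz.sum()

# Eq. (31): U = -N d(ln Z)/d(beta), checked by a central finite difference
db = 1e-6 * beta
U_from_lnZ = -N * (lnZ(beta + db) - lnZ(beta - db)) / (2 * db)
print(np.isclose(U, U_from_lnZ))            # -> True

S = N * kB * lnZ(beta) + U / T              # Eq. (37)
F = -N * kB * T * lnZ(beta)                 # Eq. (38)
print(np.isclose(F, U - T * S))             # F = U - T*S -> True
```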

VIII. QUANTUM STATES AND ENERGY LEVELS

A. Stationary Schrödinger equation

In the formalism of quantum mechanics, quantized states and their energies E are the solutions of the eigenvalue problem for a matrix or for a differential operator. In the latter case the problem is formulated as the so-called stationary Schrödinger equation

\hat H\Psi = E\Psi,    (39)

where Ψ = Ψ(r) is a complex function called the wave function. The physical interpretation of the wave function is that |Ψ(r)|² gives the probability for the particle to be found near the space point r. More precisely, the number of measurements dN, out of the total N measurements, in which the particle is found in the elementary volume d³r = dx dy dz around r is given by

dN = N\,|\Psi(\mathbf{r})|^2\,d^3r.    (40)

The wave function satisfies the normalization condition

1 = \int d^3r\,|\Psi(\mathbf{r})|^2.    (41)

The operator \hat H in Eq. (39) is the so-called Hamilton operator or Hamiltonian. For one particle, it is the sum of the kinetic and potential energies

\hat H = \frac{\hat{\mathbf{p}}^2}{2m} + U(\mathbf{r}),    (42)

where the classical momentum p is replaced by the operator

\hat{\mathbf{p}} = -i\hbar\frac{\partial}{\partial\mathbf{r}}.    (43)

The Schrödinger equation can be formulated both for single particles and for systems of particles. In this course we will restrict ourselves to single particles. In this case the notation ε will be used for single-particle energy levels instead of E. One can see that Eq. (39) is a second-order linear differential equation. It is an ordinary differential equation in one dimension and a partial differential equation in two and more dimensions. If there is a potential energy, this is a linear differential equation with variable coefficients that can be solved analytically only in special cases. In the case U = 0 it is a linear differential equation with constant coefficients that is easy to solve analytically.

An important component of the quantum formalism is the boundary conditions for the wave function. In particular, for a particle inside a box with rigid walls the boundary condition is Ψ = 0 at the walls, so that Ψ(r) joins smoothly with the value Ψ(r) = 0 everywhere outside the box. In this case it is also guaranteed that |Ψ(r)|² is integrable and Ψ can be normalized according to Eq. (41). It turns out that a solution of Eq. (39) satisfying the boundary conditions exists only for a discrete set of E values that are called eigenvalues. The corresponding Ψ are called eigenfunctions, and all together are called eigenstates. Eigenvalue problems, both for matrices and for differential operators, were known in mathematics before the advent of quantum mechanics. The creators of quantum mechanics, mainly Schrödinger and Heisenberg, realized that this mathematical formalism can accurately describe the quantization of energy levels observed in experiments. Whereas Schrödinger formulated his famous Schrödinger equation, Heisenberg made a major contribution to the description of quantum systems with matrices.

B. Energy levels of a particle in a box

As an illustration, consider a particle in a one-dimensional rigid box, 0 ≤ x ≤ L. In this case the momentum operator becomes

\hat p = -i\hbar\frac{d}{dx}    (44)

and Eq. (39) takes the form

-\frac{\hbar^2}{2m}\frac{d^2\Psi(x)}{dx^2} = \varepsilon\Psi(x)    (45)

and can be represented as

\frac{d^2\Psi(x)}{dx^2} + k^2\Psi(x) = 0, \qquad k^2 = \frac{2m\varepsilon}{\hbar^2}.    (46)

The solution of this equation satisfying the boundary conditions Ψ(0) = Ψ(L) = 0 has the form

\Psi_\nu(x) = A\sin(k_\nu x), \qquad k_\nu = \frac{\pi\nu}{L}, \qquad \nu = 1, 2, 3, \ldots,    (47)

where the eigenstates are labeled by the index ν. The constant following from Eq. (41) is A = \sqrt{2/L}. The energy eigenvalues are given by

\varepsilon_\nu = \frac{\hbar^2 k_\nu^2}{2m} = \frac{\pi^2\hbar^2\nu^2}{2mL^2}.    (48)

One can see that the energy ε is quadratic in the momentum p = ħk (de Broglie relation), as it should be, but the energy levels are discrete because of the quantization. For very large boxes, L → ∞, the energy levels become quasicontinuous. The lowest-energy level, with ν = 1, is called the ground state.

For a three-dimensional box with sides L_x, L_y, and L_z one has to solve the Schrödinger equation

-\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right)\Psi = \varepsilon\Psi    (49)

with similar boundary conditions. The solution factorizes and has the form

\Psi_{\nu_x,\nu_y,\nu_z}(x,y,z) = A\sin(k_{\nu_x}x)\sin(k_{\nu_y}y)\sin(k_{\nu_z}z), \qquad k_\alpha = \frac{\pi\nu_\alpha}{L_\alpha}, \qquad \nu_\alpha = 1, 2, 3, \ldots,    (50)

where α = x, y, z. The energy levels are

\varepsilon = \frac{\hbar^2 k^2}{2m} = \frac{\hbar^2(k_x^2 + k_y^2 + k_z^2)}{2m}    (51)

and are parametrized by the three quantum numbers ν_α. The ground state is (ν_x, ν_y, ν_z) = (1, 1, 1). One can order the states in increasing ε and number them by the index j, the ground state being j = 1. If L_x = L_y = L_z = L, then

\varepsilon_{\nu_x,\nu_y,\nu_z} = \frac{\pi^2\hbar^2}{2mL^2}\left(\nu_x^2 + \nu_y^2 + \nu_z^2\right)    (52)

and the same value of ε_j can be realized for different sets of ν_x, ν_y, and ν_z, for instance, (1,5,12), (5,12,1), etc. The number of different sets of (ν_x, ν_y, ν_z) having the same ε_j is called the degeneracy and is denoted g_j. States with ν_x = ν_y = ν_z have g_j = 1 and are called non-degenerate. If only two of the numbers ν_x, ν_y, and ν_z coincide, the degeneracy is g_j = 3. If all three numbers are different, g_j = 3! = 6. If one sums over the energy levels parametrized by j, one has to multiply the summands by the corresponding degeneracies.

C. Density of states

For systems of a large size, and thus very finely quantized states, one can define the density of states ρ(ε) as the number of energy levels dn in the interval dε, that is,

dn = \rho(\varepsilon)\,d\varepsilon.    (53)

It is convenient to start the calculation of ρ(ε) by introducing the number of states dn in the "elementary volume" dν_x dν_y dν_z, considering ν_x, ν_y, and ν_z as continuous. The result obviously is

dn = d\nu_x\,d\nu_y\,d\nu_z,    (54)

that is, the corresponding density of states is 1. Now one can rewrite the same number of states in terms of the wave vector k, using k_α = πν_α/L_α from Eq. (50):

dn = \frac{V}{\pi^3}\,dk_x\,dk_y\,dk_z,    (55)

where V = L_x L_y L_z is the volume of the box.
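The degeneracies g_j described above can be generated by brute force. The sketch below is illustrative only (the cutoff nu_max is an arbitrary choice, large enough for the lowest levels): it lists the lowest levels of a particle in a cubic box in units of π²ħ²/(2mL²), i.e. the distinct values of ν_x² + ν_y² + ν_z², together with their degeneracies.

```python
from itertools import product
from collections import Counter

nu_max = 8  # quantum numbers 1..nu_max per direction; sufficient for the lowest levels

# Energy in units of pi^2 hbar^2 / (2 m L^2) is nx^2 + ny^2 + nz^2, Eq. (52)
levels = Counter(nx**2 + ny**2 + nz**2
                 for nx, ny, nz in product(range(1, nu_max + 1), repeat=3))

for j, (e, g) in enumerate(sorted(levels.items())[:8], start=1):
    print(f"j = {j}:  energy = {e:3d}  degeneracy g_j = {g}")
# e.g. the ground state (1,1,1) has energy 3 and g = 1; (1,1,2) has energy 6 and g = 3
```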

After that one can go over to the number of states within the shell dk, as was done for the distribution function of molecules over velocities. Taking into account that k_x, k_y, and k_z are all positive, this shell is not a complete spherical shell but 1/8 of it. Thus

dn = \frac{V}{\pi^3}\,\frac{4\pi k^2}{8}\,dk.    (56)

The last step is to change the variable from k to ε using Eq. (51). With

k = \frac{\sqrt{2m}}{\hbar}\sqrt{\varepsilon}, \qquad k^2 = \frac{2m}{\hbar^2}\varepsilon, \qquad \frac{dk}{d\varepsilon} = \frac{1}{2}\frac{\sqrt{2m}}{\hbar}\frac{1}{\sqrt{\varepsilon}},    (57)

one obtains Eq. (53) with

\rho(\varepsilon) = \frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\sqrt{\varepsilon}.    (58)

IX. STATISTICAL THERMODYNAMICS OF THE IDEAL GAS

In this section we demonstrate how the results for the ideal gas, previously obtained within the molecular theory, follow from the more general framework of statistical physics developed in the preceding sections. Consider an ideal gas in a sufficiently large container. In this case the energy levels of the system, see Eq. (51), are quantized so finely that one can introduce the density of states ρ(ε) defined by Eq. (53). The number of particles in quantum states within the energy interval dε is the product of the number of particles in one state N(ε) and the number of states dn_ε in this energy interval. With N(ε) given by Eq. (29), one obtains

dN_\varepsilon = N(\varepsilon)\,dn_\varepsilon = \frac{N}{Z}e^{-\beta\varepsilon}\rho(\varepsilon)\,d\varepsilon.    (59)

For finely quantized levels one can replace summation in Eq. (28) and similar formulas by integration:

\sum_i \ldots \;\Longrightarrow\; \int d\varepsilon\,\rho(\varepsilon)\,\ldots    (60)

Thus for the partition function (28) one has

Z = \int d\varepsilon\,\rho(\varepsilon)\,e^{-\beta\varepsilon}.    (61)

For quantum particles in a rigid box, with the help of Eq. (58) one obtains

Z = \frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\int_0^{\infty} d\varepsilon\,\sqrt{\varepsilon}\,e^{-\beta\varepsilon} = \frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\frac{\sqrt{\pi}}{2\beta^{3/2}} = V\left(\frac{mk_BT}{2\pi\hbar^2}\right)^{3/2}.    (62)

Using the classical relation ε = mv²/2, and thus dε = mv dv, one can obtain the formula for the number of particles in the speed interval dv:

dN_v = N f(v)\,dv,    (63)

where the speed distribution function f(v) is given by

f(v) = \frac{1}{Z}e^{-\beta\varepsilon}\rho(\varepsilon)\,mv = \frac{1}{Z}\frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\sqrt{\frac{mv^2}{2}}\,mv\,e^{-\beta\varepsilon} = 4\pi v^2\left(\frac{m}{2\pi k_BT}\right)^{3/2}\exp\left(-\frac{mv^2}{2k_BT}\right).    (64)

This result coincides with the Maxwell distribution function obtained earlier from the molecular theory of gases. Planck's constant, which links to quantum mechanics, has disappeared from the final result.
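The closed form (62) can be verified by carrying out the integral over the density of states numerically. The sketch below is illustrative; the mass, volume, and temperature are assumed example values (roughly a helium atom in a one-liter box at room temperature), not taken from the notes.

```python
import numpy as np

hbar = 1.054571817e-34      # J s
kB = 1.380649e-23           # J/K
m = 6.64e-27                # kg, roughly a helium atom (assumed example value)
V, T = 1.0e-3, 300.0        # 1 liter box at room temperature (assumed)
beta = 1.0 / (kB * T)

# Density of states of a particle in a rigid box, Eq. (58)
rho = lambda e: V / (2 * np.pi)**2 * (2 * m / hbar**2)**1.5 * np.sqrt(e)

# Eq. (61): Z = integral of rho(eps) exp(-beta*eps); simple trapezoidal quadrature
eps = np.linspace(0.0, 50.0 / beta, 20001)
f = rho(eps) * np.exp(-beta * eps)
Z_num = np.sum((f[:-1] + f[1:]) / 2) * (eps[1] - eps[0])

Z_exact = V * (m * kB * T / (2 * np.pi * hbar**2))**1.5   # Eq. (62)
print(Z_num / Z_exact)                                    # close to 1
```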

The internal energy of the ideal gas is its kinetic energy

U = N\bar\varepsilon,    (65)

\bar\varepsilon = m\bar{v^2}/2 being the average kinetic energy of an atom. The latter can be calculated with the help of the speed distribution function above, as was done in the molecular theory of gases. The result has the form

\bar\varepsilon = \frac{f}{2}k_BT,    (66)

where in our case f = 3, corresponding to the three translational degrees of freedom. The same result can be obtained from Eq. (31):

U = -N\frac{\partial\ln Z}{\partial\beta} = -N\frac{\partial}{\partial\beta}\left[\ln V + \frac{3}{2}\ln\frac{m}{2\pi\hbar^2} - \frac{3}{2}\ln\beta\right] = \frac{3}{2}\frac{N}{\beta} = \frac{3}{2}Nk_BT.    (67)

After that, the known result for the heat capacity C_V = (∂U/∂T)_V follows.

The pressure P is defined by the thermodynamic formula

P = -\left(\frac{\partial F}{\partial V}\right)_T.    (68)

With the help of Eqs. (38) and (62) one obtains

P = Nk_BT\frac{\partial\ln Z}{\partial V} = Nk_BT\frac{\partial\ln V}{\partial V} = \frac{Nk_BT}{V},    (69)

which amounts to the equation of state of the ideal gas, PV = Nk_BT.

X. STATISTICAL THERMODYNAMICS OF HARMONIC OSCILLATORS

Consider an ensemble of N identical harmonic oscillators, each of them described by the Hamiltonian

\hat H = \frac{\hat p^2}{2m} + \frac{kx^2}{2}.    (70)

Here the momentum operator \hat p is given by Eq. (44) and k is the spring constant. This theoretical model can describe, for instance, vibrational degrees of freedom of diatomic molecules. In this case x is the elongation of the chemical bond between the two atoms, relative to the equilibrium bond length. The stationary Schrödinger equation (39) for a harmonic oscillator becomes

-\frac{\hbar^2}{2m}\frac{d^2\Psi(x)}{dx^2} + \frac{kx^2}{2}\Psi(x) = \varepsilon\Psi(x).    (71)

The boundary conditions for this equation require that Ψ(±∞) = 0 and that the integral in Eq. (41) converges. In contrast to Eq. (45), this is a linear differential equation with a variable coefficient. Such differential equations in general do not have solutions in terms of known functions. In some cases the solution can be expressed through special functions such as hypergeometric functions, Bessel functions, etc. The solution of Eq. (71) can be found, but this task belongs to quantum mechanics courses. The main result that we need here is that the energy eigenvalues of the harmonic oscillator have the form

\varepsilon_\nu = \hbar\omega_0\left(\nu + \frac{1}{2}\right), \qquad \nu = 0, 1, 2, \ldots,    (72)

where ω_0 = \sqrt{k/m} is the frequency of the oscillations. The energy level with ν = 0 is the ground state. The ground-state energy ε_0 = ħω_0/2 is not zero, as would be the case for a classical oscillator. This quantum ground-state energy is called the zero-point energy. It is irrelevant in the calculation of the heat capacity of an ensemble of harmonic oscillators.

The partition function of a harmonic oscillator is

Z = \sum_{\nu=0}^{\infty} e^{-\beta\varepsilon_\nu} = e^{-\beta\hbar\omega_0/2}\sum_{\nu=0}^{\infty} e^{-\beta\hbar\omega_0\nu} = \frac{e^{-\beta\hbar\omega_0/2}}{1 - e^{-\beta\hbar\omega_0}} = \frac{1}{e^{\beta\hbar\omega_0/2} - e^{-\beta\hbar\omega_0/2}} = \frac{1}{2\sinh(\beta\hbar\omega_0/2)}.    (73)
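The closed form (73) is easy to confirm by summing the series over levels directly. A brief illustrative sketch follows; the ratio βħω_0 is an assumed example value, and the truncation of the sum is chosen so that the neglected terms are negligible.

```python
import numpy as np

x = 0.7                                     # x = beta*hbar*omega_0, assumed example value

nu = np.arange(0, 200)                      # e^(-0.7*200) is negligible, so 200 terms suffice
Z_sum = np.sum(np.exp(-x * (nu + 0.5)))     # direct sum over the levels of Eq. (72)
Z_closed = 1.0 / (2.0 * np.sinh(x / 2.0))   # Eq. (73)

print(Z_sum, Z_closed, np.isclose(Z_sum, Z_closed))   # -> True
```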

Here the result for the geometric progression

1 + x + x^2 + x^3 + \ldots = (1 - x)^{-1}, \qquad |x| < 1,    (74)

was used. The hyperbolic functions are defined by

\sinh x = \frac{e^x - e^{-x}}{2}, \quad \cosh x = \frac{e^x + e^{-x}}{2}, \quad \tanh x = \frac{\sinh x}{\cosh x}, \quad \coth x = \frac{\cosh x}{\sinh x} = \frac{1}{\tanh x}.    (75)

We will be using the formulas

\sinh'(x) = \cosh x, \qquad \cosh'(x) = \sinh x    (76)

and the limiting forms

\sinh x \cong \begin{cases} x, & x \ll 1 \\ e^x/2, & x \gg 1 \end{cases}, \qquad
\tanh x \cong \begin{cases} x, & x \ll 1 \\ 1, & x \gg 1. \end{cases}    (77)

The internal (average) energy of the ensemble of oscillators is given by Eq. (31). This yields

U = -N\frac{\partial\ln Z}{\partial\beta} = N\frac{\partial\ln[2\sinh(\beta\hbar\omega_0/2)]}{\partial\beta} = N\frac{\cosh(\beta\hbar\omega_0/2)}{\sinh(\beta\hbar\omega_0/2)}\frac{\hbar\omega_0}{2} = N\frac{\hbar\omega_0}{2}\coth\left(\frac{\beta\hbar\omega_0}{2}\right)    (78)

or

U = N\frac{\hbar\omega_0}{2}\coth\left(\frac{\hbar\omega_0}{2k_BT}\right).    (79)

Another way of writing the internal energy is

U = N\hbar\omega_0\left(\frac{1}{e^{\beta\hbar\omega_0} - 1} + \frac{1}{2}\right),    (80)

where the constant term with 1/2 is the zero-point energy. The limiting low- and high-temperature cases of this formula are

U \cong \begin{cases} N\hbar\omega_0/2, & k_BT \ll \hbar\omega_0 \\ Nk_BT, & k_BT \gg \hbar\omega_0. \end{cases}    (81)

In the low-temperature case, almost all oscillators are in their ground states, since e^{−βħω_0} ≪ 1. Thus the main term that contributes to the partition function in Eq. (73) is the ν = 0 term, and correspondingly U is the sum of the ground-state energies of all oscillators.

At high temperatures, the previously obtained classical result is reproduced, and Planck's constant, which links to quantum mechanics, disappears. In this case e^{−βħω_0} is only slightly smaller than one, so that very many different ν contribute to Z in Eq. (73). One can then replace summation by integration, as was done above for the particle in a potential box, where the classical results were also obtained.

One can see that the crossover between the two limiting cases corresponds to k_BT ∼ ħω_0. In the high-temperature limit k_BT ≫ ħω_0, many low-lying energy levels are populated. The top populated level ν* can be estimated from k_BT ∼ ħω_0 ν*, so that

\nu^* \cong \frac{k_BT}{\hbar\omega_0} \gg 1.    (82)

The probability to find an oscillator in states with ν ≫ ν* is exponentially small.

The heat capacity is defined by

C = \frac{dU}{dT} = N\frac{\hbar\omega_0}{2}\frac{1}{\sinh^2[\hbar\omega_0/(2k_BT)]}\frac{\hbar\omega_0}{2k_BT^2} = Nk_B\left[\frac{\hbar\omega_0/(2k_BT)}{\sinh[\hbar\omega_0/(2k_BT)]}\right]^2.    (83)

This formula has the limiting cases

C \cong \begin{cases} Nk_B\left(\dfrac{\hbar\omega_0}{k_BT}\right)^2\exp\left(-\dfrac{\hbar\omega_0}{k_BT}\right), & k_BT \ll \hbar\omega_0 \\ Nk_B, & k_BT \gg \hbar\omega_0. \end{cases}    (84)
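The limiting behaviors (84) can be read off numerically from Eq. (83). The sketch below is an illustration only (the list of reduced temperatures is arbitrary): it evaluates C/(Nk_B) = [x/sinh x]² with x = ħω_0/(2k_BT) on both sides of the crossover k_BT ∼ ħω_0 and compares with the low-temperature asymptote.

```python
import numpy as np

# t = kB*T / (hbar*omega_0), the temperature in units of the crossover temperature
for t in (0.05, 0.2, 0.5, 1.0, 2.0, 10.0):
    x = 1.0 / (2.0 * t)                       # x = hbar*omega_0 / (2 kB T)
    C_per_NkB = (x / np.sinh(x))**2           # Eq. (83)
    low_T = (1.0 / t)**2 * np.exp(-1.0 / t)   # low-temperature asymptote, Eq. (84)
    print(f"kBT/(hbar w0) = {t:5.2f}   C/(N kB) = {C_per_NkB:.4e}   low-T form = {low_T:.4e}")
# C/(N kB) approaches 1 at high T and matches the exponential form at low T
```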

FIG. 2: Heat capacity of harmonic oscillators.

One can see that at high temperatures the heat capacity can be written as

C \cong \frac{f}{2}Nk_B,    (85)

where the effective number of degrees of freedom for an oscillator is f = 2. The explanation of the additional factor 2 is that the oscillator has not only kinetic but also potential energy, and the average values of these two energies are the same. Thus the total amount of energy in a vibrational degree of freedom doubles with respect to the translational and rotational degrees of freedom.

At low temperatures the vibrational heat capacity above becomes exponentially small. One says that vibrational degrees of freedom are frozen out at low temperatures.

The heat capacity of an ensemble of harmonic oscillators in the whole temperature range, Eq. (83), is plotted in Fig. 2.

The average quantum number of an oscillator is given by

\bar n \equiv \langle\nu\rangle = \frac{1}{Z}\sum_{\nu=0}^{\infty}\nu\,e^{-\beta\varepsilon_\nu}.    (86)

Using Eq. (72), one can calculate this sum as follows:

\bar n = \frac{1}{Z}\sum_{\nu=0}^{\infty}\left(\frac{\varepsilon_\nu}{\hbar\omega_0} - \frac{1}{2}\right)e^{-\beta\varepsilon_\nu} = \frac{1}{\hbar\omega_0}\frac{1}{Z}\sum_{\nu=0}^{\infty}\varepsilon_\nu e^{-\beta\varepsilon_\nu} - \frac{1}{2} = -\frac{1}{\hbar\omega_0}\frac{1}{Z}\frac{\partial Z}{\partial\beta} - \frac{1}{2} = \frac{U}{N\hbar\omega_0} - \frac{1}{2}.    (87)

Finally, with the help of Eq. (80), one finds

\bar n = \frac{1}{e^{\beta\hbar\omega_0} - 1} = \frac{1}{\exp[\hbar\omega_0/(k_BT)] - 1}.    (88)

This is the Bose-Einstein distribution that will be considered later. Its meaning is the following: considering the oscillator as a "box" or "mode", one can ask what the average number of quanta (that is, "particles" or quasiparticles) in this box is at a given temperature. The latter is given by the formula above.

Substituting Eq. (88) into Eq. (80), one obtains the nice formula

U = N\hbar\omega_0\left(\bar n + \frac{1}{2}\right),    (89)

resembling Eq. (72).
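Relation (89) can also be confirmed directly. The sketch below is illustrative (it reuses an assumed ratio ħω_0/k_BT): it computes n̄ by direct summation over levels, compares it with the Bose-Einstein form (88), and checks that n̄ + 1/2 reproduces the coth form of Eq. (79).

```python
import numpy as np

x = 0.7                                      # x = hbar*omega_0 / (kB*T), assumed example value

nu = np.arange(0, 200)
p = np.exp(-x * (nu + 0.5))
p /= p.sum()                                 # Boltzmann probabilities of the oscillator levels

n_sum = np.sum(nu * p)                       # Eq. (86): direct average of nu
n_BE = 1.0 / (np.exp(x) - 1.0)               # Eq. (88): Bose-Einstein occupation
print(np.isclose(n_sum, n_BE))               # -> True

# Eq. (89) vs. Eq. (79), in units of N*hbar*omega_0: n + 1/2 = (1/2) coth(x/2)
print(np.isclose(n_BE + 0.5, 0.5 / np.tanh(x / 2.0)))   # -> True
```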

At low temperatures, k_BT ≪ ħω_0, the average quantum number n̄ becomes exponentially small. This means that the oscillator is predominantly in its ground state, ν = 0. At high temperatures, k_BT ≫ ħω_0, Eq. (88) yields

\bar n \cong \frac{k_BT}{\hbar\omega_0},    (90)

which has the same order of magnitude as the top populated quantum level number ν* given by Eq. (82).

The density of states ρ(ε) for the harmonic oscillator can be easily found. The number of levels dn_ν in the interval dν is

dn_\nu = d\nu    (91)

(that is, there is exactly one level in the interval dν = 1). Then, changing the variables with the help of Eq. (72), one finds the number of levels dn_ε in the interval dε as

dn_\varepsilon = \frac{d\nu}{d\varepsilon}\,d\varepsilon = \left(\frac{d\varepsilon}{d\nu}\right)^{-1}d\varepsilon = \rho(\varepsilon)\,d\varepsilon,    (92)

where

\rho(\varepsilon) = \frac{1}{\hbar\omega_0}.    (93)

To conclude this section, let us reproduce the classical high-temperature results by a simpler method. At high tem
