Maxwell equal area rule
Below the critical temperature an isotherm of the Van der Waals equation oscillates as shown.
The red portion of the isotherm between a and c is unstable; the Van der Waals equation fails to describe real substances in this region because the equation assumes that the fluid is uniform, whereas between a and c it is more stable for the system to exist as a coexistence of two different phases, a denser phase which we normally call liquid and a sparser phase which we normally call gas. To fix this problem James Clerk Maxwell (1875) replaced the isotherm between a and c with a horizontal line positioned so that the areas of the two hatched regions are equal. The flat portion of the isotherm now corresponds to liquid-vapor equilibrium, in which part of the system is in the liquid phase and part is in the gas phase. The portions a–d and c–e are interpreted as metastable states of superheated liquid and supercooled vapor, respectively.[1] The equal area rule can be expressed as:
$P_V\,(V_G - V_L) = \int_{V_L}^{V_G} P\,dV,$
where $P_V$ is the vapor pressure (the flat portion of the curve), $V_L$ is the volume of the pure liquid phase at point a on the diagram, and $V_G$ is the volume of the pure gas phase at point c on the diagram. In the two-phase region, the volume occupied by the liquid portion plus the volume occupied by the gas portion equals the total volume V.
Maxwell justified the rule by arguing that the work done on the system in going from c to b should equal the work released in going from a to b (the area on a PV diagram corresponds to mechanical work). This is because the change in the free energy function A(T,V) equals the work done during a reversible process, and, being a state function, the free energy must take on a unique value regardless of path. In particular, the value of A at point b should be the same whether the path came from the left or the right, went straight across the horizontal isotherm, or followed the original Van der Waals isotherm. Maxwell's argument is not totally convincing, since it requires a reversible path through a region of thermodynamic instability. Nevertheless, more subtle arguments based on modern theories of phase equilibrium seem to confirm the Maxwell equal area construction, and it remains a valid modification of the Van der Waals equation of state.[2]
The Maxwell equal area rule can also be derived from an assumption of equal chemical potential of coexisting liquid and vapour phases.[3] On the isotherm shown in the above plot, points a and c are the only pair of points (at the given temperature) which fulfill the equilibrium condition of having equal pressure, temperature and chemical potential. Define $P_V$, $T$ and $\mu$ as the fixed pressure, temperature and chemical potential of the two-phase system. The independent variables V and T suggest the use of the Helmholtz free energy A(T,V) as the thermodynamic potential, for which the pressure is given by
$P = -\left(\frac{\partial A}{\partial V}\right)_T.$
The integral of the pressure with respect to volume of the Van der Waals equation between the pure liquid phase and the pure gas phase is:
$\int_{V_L}^{V_G} P\,dV = -\int_{V_L}^{V_G} \left(\frac{\partial A}{\partial V}\right)_T dV = A(T,V_L) - A(T,V_G).$
Since $A = \mu N - PV$ and since the chemical potential is the same at both limits, it follows that:
$\int_{V_L}^{V_G} P\,dV = P_V\,(V_G - V_L),$
which is just Maxwell's equal area rule. The Van der Waals equation may be solved for $V_G$ and $V_L$ as functions of the temperature $T$ and the vapor pressure $P_V$. The vapor pressure may be expressed as:
$P_V = \frac{1}{V_G - V_L}\int_{V_L}^{V_G} P\,dV.$
Since the gas and liquid volumes are functions of $P_V$ and T only, this equation can be solved numerically to obtain $P_V$ as a function of temperature (and of the number of particles N), which may then be used to determine the gas and liquid volumes.
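As an illustration, the following sketch (not part of the original derivation) solves the Maxwell construction numerically for the Van der Waals equation written in reduced variables, $P_r = 8T_r/(3V_r - 1) - 3/V_r^2$; the function names, the choice of reduced units, and the initial guess (tuned for temperatures not far below the critical point) are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import fsolve

def vdw_pressure(v, T):
    """Reduced Van der Waals pressure P_r(v_r, T_r)."""
    return 8.0 * T / (3.0 * v - 1.0) - 3.0 / v**2

def coexistence(T, guess=(0.6, 2.5, 0.65)):
    """Return (v_L, v_G, P_V) in reduced units for a reduced temperature T < 1."""
    def equations(unknowns):
        vL, vG, p = unknowns
        # Closed form of Int_{vL}^{vG} P dv for the reduced Van der Waals isotherm.
        area = (8.0 * T / 3.0) * np.log((3.0 * vG - 1.0) / (3.0 * vL - 1.0)) \
               + 3.0 / vG - 3.0 / vL
        return [vdw_pressure(vL, T) - p,      # point a lies on the isotherm
                vdw_pressure(vG, T) - p,      # point c lies on the isotherm
                p * (vG - vL) - area]         # Maxwell equal-area condition
    return fsolve(equations, guess)

vL, vG, pV = coexistence(T=0.9)
print(f"T_r = 0.9:  v_L = {vL:.4f}, v_G = {vG:.4f}, P_V = {pV:.4f}")
```

The three equations express that points a and c lie on the same isotherm at the common pressure $P_V$ and that the equal-area condition holds; solving them together yields the coexistence volumes and the vapor pressure at the chosen temperature.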
A Maxwell equal-area construction can be made for any pair of conjugate variables, while holding two other non-conjugate variables constant. For example, on a plot of temperature versus entropy along a curve of constant pressure, the proper thermodynamic potential is the enthalpy H(S,P), for which the temperature is given by $T = \left(\frac{\partial H}{\partial S}\right)_P$. For the two-phase system, the integral of temperature between the pure liquid entropy and the pure gas entropy is given by
$\int_{S_L}^{S_G} T\,dS = H(S_G,P) - H(S_L,P).$
Since $H = \mu N + TS$ and since the temperature and chemical potential are the same at both limits, it follows that:
$\int_{S_L}^{S_G} T\,dS = T\,(S_G - S_L),$
which is Maxwell's equal area rule for the temperature–entropy conjugate pair at constant pressure.
- ^ Maxwell, J. C., The Scientific Papers of James Clerk Maxwell, Dover, 1965 (c. 1890), p. 424.
- ^ Cross, Michael, First Order Phase Transitions, http://www.pma.caltech.edu/~mcc/Ph127/b/Lecture3.pdf
- ^ Elhassan, A. E.; Craven, R. J. B.; de Reuck, K. M., "The Area Method for pure fluids and an analysis of the two-phase region", Fluid Phase Equilibria 130 (1997): 167–187.
Quantum entropic uncertainty principle
For many distributions, the standard deviation is not a particularly natural way of quantifying the structure. For example, uncertainty relations in which one of the observables is an angle have little physical meaning for fluctuations larger than one period.[1][2][3][4] Other examples include highly bimodal distributions, or unimodal distributions with divergent variance.
A solution that overcomes these issues is an uncertainty relation based on entropic uncertainty instead of the product of variances. While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III conjectured a stronger extension of the uncertainty principle based on entropic certainty.[5] This conjecture, also studied by Hirschman[6] and proven in 1975 by Beckner[7] and by Iwo Bialynicki-Birula and Jerzy Mycielski,[8] is that, for two normalized, dimensionless Fourier transform pairs f(a) and g(b), where
$f(a) = \int_{-\infty}^{\infty} g(b)\, e^{2\pi i a b}\, db$  and  $g(b) = \int_{-\infty}^{\infty} f(a)\, e^{-2\pi i a b}\, da,$
the Shannon information entropies
$H_a = -\int_{-\infty}^{\infty} |f(a)|^2 \log |f(a)|^2 \, da$
and
$H_b = -\int_{-\infty}^{\infty} |g(b)|^2 \log |g(b)|^2 \, db$
are subject to the following constraint:
$H_a + H_b \ge \log\frac{e}{2},$
where the logarithms may be in any base. The probability distribution functions associated with the position wave function ψ(x) and the momentum wave function φ(p) have dimensions of inverse length and inverse momentum respectively, but the entropies may be rendered dimensionless by:
$H_x = -\int_{-\infty}^{\infty} |\psi(x)|^2 \log\!\left( x_0\, |\psi(x)|^2 \right) dx, \qquad H_p = -\int_{-\infty}^{\infty} |\varphi(p)|^2 \log\!\left( p_0\, |\varphi(p)|^2 \right) dp,$
where x0 and p0 are some arbitrarily chosen length and momentum respectively, which render the arguments of the logarithms dimensionless. Note that the entropies will be functions of these chosen parameters. Due to the Fourier transform relation between the position wave function ψ(x) and the momentum wave function φ(p), the above constraint can be written for the corresponding entropies as:
$H_x + H_p \ge \log\!\left(\frac{e\,h}{2\,x_0\,p_0}\right),$
where h is Planck's constant. Depending on one's choice of the x0 p0 product, the expression may be written in many ways. If x0 p0 is chosen to be h, then:
$H_x + H_p \ge \log\frac{e}{2}.$
If x0 p0 is chosen to be ħ, then:
$H_x + H_p \ge \log\left(e\,\pi\right).$
If x0 and p0 are chosen to be unity in whatever system of units is being used, then
$H_x + H_p \ge \log\frac{e\,h}{2},$
where h is interpreted as a dimensionless number equal to the value of Planck's constant in the chosen system of units.
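As a numerical illustration (a sketch of my own, not part of the original text), the following Python snippet computes $H_x$ and $H_p$ on a grid for a Gaussian wave packet, using an arbitrarily chosen value of ħ, and compares their sum with the bound $\log\!\left(e h/(2 x_0 p_0)\right)$ for the three choices of the product x0 p0 discussed above; the grid sizes, packet parameters, and helper names are assumptions of the sketch. Because a Gaussian saturates the bound, the two printed numbers should agree in each row.

```python
import numpy as np

hbar = 0.7                     # arbitrary value so that h, hbar and 1 all differ
h = 2.0 * np.pi * hbar

# Position grid and a normalized Gaussian packet of width sigma with a momentum kick.
x = np.linspace(-30.0, 30.0, 2048)
dx = x[1] - x[0]
sigma, k0 = 1.3, 0.7
psi = (2.0 * np.pi * sigma**2) ** (-0.25) * np.exp(-x**2 / (4.0 * sigma**2) + 1j * k0 * x)

# Momentum wave function phi(p) = (2*pi*hbar)^(-1/2) * Int psi(x) exp(-i p x / hbar) dx.
p = np.linspace(-8.0, 8.0, 2048)
dp = p[1] - p[0]
phi = (np.exp(-1j * np.outer(p, x) / hbar) @ psi) * dx / np.sqrt(2.0 * np.pi * hbar)

rho_x = np.abs(psi) ** 2       # position and momentum probability densities
rho_p = np.abs(phi) ** 2

def entropy(rho, step, scale):
    """Dimensionless Shannon entropy  -Int rho * log(scale * rho)."""
    m = rho > 1e-300
    return -np.sum(rho[m] * np.log(scale * rho[m])) * step

for label, x0p0 in [("h", h), ("hbar", hbar), ("1", 1.0)]:
    x0 = p0 = np.sqrt(x0p0)    # any split of the product x0*p0 gives the same sum
    Hsum = entropy(rho_x, dx, x0) + entropy(rho_p, dp, p0)
    bound = np.log(np.e * h / (2.0 * x0p0))
    print(f"x0*p0 = {label:>4}:  H_x + H_p = {Hsum:.4f}   bound = {bound:.4f}")
```

The entropy sum depends on x0 and p0 only through their product, which is why the three bounds quoted in the text differ only by the constant inside the logarithm.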
The quantum entropic uncertainty principle is more restrictive than the Heisenberg uncertainty principle. From the inverse logarithmic Sobolev inequalities[9]
$H_x \le \frac{1}{2}\log\!\left(\frac{2 e \pi\, \sigma_x^2}{x_0^2}\right), \qquad H_p \le \frac{1}{2}\log\!\left(\frac{2 e \pi\, \sigma_p^2}{p_0^2}\right)$
(equivalently, from the fact that normal distributions maximize the entropy among all distributions with a given variance), it readily follows that this entropic uncertainty principle is stronger than the one based on standard deviations, because
$\sigma_x \sigma_p \ge \frac{x_0\, p_0}{2 e \pi}\, \exp\!\left(H_x + H_p\right) \ge \frac{x_0\, p_0}{2 e \pi}\cdot\frac{e\,h}{2\,x_0\,p_0} = \frac{\hbar}{2}.$
A few remarks on these inequalities. First, the choice of base e is a matter of popular convention in physics; the logarithm can alternatively be in any base, provided that it is consistent on both sides of the inequality. Second, the normal distribution saturates the inequality, and it is the only distribution with this property, because it is the maximum entropy probability distribution among those with fixed variance.
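The following short numeric check (my own illustration, with ħ set to 1 and an arbitrarily chosen bimodal test state) evaluates the chain of inequalities above for a superposition of two well-separated Gaussians: the product σx σp is far above ħ/2, while the entropic bound lies between the two, which is the sense in which the entropic relation is the stronger statement.

```python
import numpy as np

hbar = 1.0
x = np.linspace(-30.0, 30.0, 2048); dx = x[1] - x[0]
p = np.linspace(-8.0, 8.0, 2048);   dp = p[1] - p[0]

# Normalized superposition of two Gaussian lobes centred at +/- 6.
psi = np.exp(-(x - 6.0) ** 2 / 4.0) + np.exp(-(x + 6.0) ** 2 / 4.0)
psi = psi / np.sqrt(np.sum(psi ** 2) * dx)
phi = (np.exp(-1j * np.outer(p, x) / hbar) @ psi) * dx / np.sqrt(2.0 * np.pi * hbar)

rho_x, rho_p = psi ** 2, np.abs(phi) ** 2

def std(rho, s, step):
    mean = np.sum(s * rho) * step
    return np.sqrt(np.sum((s - mean) ** 2 * rho) * step)

def entropy(rho, step, scale):
    m = rho > 1e-300
    return -np.sum(rho[m] * np.log(scale * rho[m])) * step

x0 = p0 = np.sqrt(hbar)                         # any choice with x0 * p0 = hbar
Hx, Hp = entropy(rho_x, dx, x0), entropy(rho_p, dp, p0)
entropic = (x0 * p0 / (2.0 * np.e * np.pi)) * np.exp(Hx + Hp)

print(f"sigma_x * sigma_p    = {std(rho_x, x, dx) * std(rho_p, p, dp):.3f}")
print(f"entropic lower bound = {entropic:.3f}")
print(f"hbar / 2             = {hbar / 2.0:.3f}")
```

For a single Gaussian the first inequality in the chain is saturated; for the bimodal state both inequalities are strict, and the entropic bound is the tighter of the two lower bounds on σx σp.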
Entropic uncertainty of the normal distribution
We demonstrate this method on the ground state of the QHO, which as discussed above saturates the usual uncertainty based on standard deviations. The length scale can be set to whatever is convenient, so we assign
$x_0 = \sqrt{\frac{\hbar}{m\omega}}.$
The probability distribution is the normal distribution
$|\psi(x)|^2 = \sqrt{\frac{m\omega}{\pi\hbar}}\; e^{-m\omega x^2/\hbar}$
with Shannon entropy
$H_x = -\int |\psi(x)|^2 \log\!\left(x_0\, |\psi(x)|^2\right) dx = \frac{1}{2}\left(1 + \log\pi\right).$
A completely analogous calculation proceeds for the momentum distribution. Choosing a standard momentum of $p_0 = \sqrt{\hbar m \omega}$:
$H_p = \frac{1}{2}\left(1 + \log\pi\right).$
The entropic uncertainty is therefore the limiting value
$H_x + H_p = 1 + \log\pi = \log(e\pi).$
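A quick symbolic check of the numbers quoted in the box (my own sketch using sympy; the variable u = x/x0 is just a convenient substitution) confirms that each dimensionless entropy equals (1 + log π)/2, so the sum saturates the bound log(eπ) obtained for x0 p0 = ħ.

```python
import sympy as sp

u = sp.symbols('u', real=True)

# x0 * |psi(x)|^2 for the QHO ground state, written in the dimensionless variable u = x / x0.
rho = sp.exp(-u**2) / sp.sqrt(sp.pi)

# H_x = -Int rho * log(rho) du; note that -log(rho) = u**2 + log(pi)/2.
H = sp.integrate(rho * (u**2 + sp.log(sp.pi) / 2), (u, -sp.oo, sp.oo))
print(sp.simplify(H))                              # -> 1/2 + log(pi)/2

# The momentum calculation is identical in the variable p / p0, so the sum is 2*H.
print(sp.simplify(2 * H - (1 + sp.log(sp.pi))))    # -> 0: the bound log(e*pi) is saturated
```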
- ^ Carruthers, P.; Nieto, M. M. (1968), "Phase and Angle Variables in Quantum Mechanics", Reviews of Modern Physics, 40 (2): 411, doi:10.1103/RevModPhys.40.411
- ^ Judge, D. (1964), "On the uncertainty relation for angle variables", Il Nuovo Cimento, 31 (2): 332–340, doi:10.1007/BF02733639
- ^ Bouten, M.; Maene, N.; Leuven, P. (1965), "On an uncertainty relation for angle variables", Il Nuovo Cimento, 37 (3): 1119–1125, doi:10.1007/BF02773197
- ^ Louisell, W. H. (1963), "Amplitude and phase uncertainty relations", Physics Letters, 7 (1): 60–61, Bibcode:1963PhL.....7...60L, doi:10.1016/0031-9163(63)90442-6
- ^ DeWitt, B. S.; Graham, N. (1973), The Many-Worlds Interpretation of Quantum Mechanics, Princeton: Princeton University Press, pp. 52–53, ISBN 0-691-08126-3
- ^ Hirschman, I. I., Jr. (1957), "A note on entropy", American Journal of Mathematics, 79 (1): 152–156, doi:10.2307/2372390, JSTOR 2372390.
- ^ Beckner, W. (1975), "Inequalities in Fourier analysis", Annals of Mathematics, 102 (6): 159–182, doi:10.2307/1970980, JSTOR 1970980.
- ^ Bialynicki-Birula, I.; Mycielski, J. (1975), "Uncertainty Relations for Information Entropy in Wave Mechanics", Communications in Mathematical Physics, 44 (2): 129, Bibcode:1975CMaPh..44..129B, doi:10.1007/BF01608825
- ^ Chafaï, D. (2003), "Gaussian maximum of entropy and reversed log-Sobolev inequality", Séminaire de Probabilités XXXVI, Lecture Notes in Mathematics, vol. 1801, pp. 194–200, arXiv:math/0102227, doi:10.1007/978-3-540-36107-7_5, ISBN 978-3-540-00072-3