Dimensionless number that quantifies the strength of the electromagnetic interaction
When the other constants (c, h and e) have defined values, the definition reflects the relationship between α and the permeability of free space µ0, which equals µ0 = 2hα/(ce²).
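As an illustrative check (not part of the original text), the relation above can be evaluated numerically. The sketch below assumes the exact 2019 SI values of h, e and c, and the CODATA 2018 value of α:

```python
# Compute the vacuum permeability mu_0 from the relation mu_0 = 2*h*alpha / (c*e^2).
# h, e and c are exact in the 2019 SI; alpha is the CODATA 2018 recommended
# value (an assumption of this sketch).
h = 6.62607015e-34        # Planck constant, J*s (exact)
e = 1.602176634e-19       # elementary charge, C (exact)
c = 299792458             # speed of light, m/s (exact)
alpha = 7.2973525693e-3   # fine-structure constant (CODATA 2018)

mu_0 = 2 * h * alpha / (c * e**2)
print(mu_0)  # ~1.256637e-6 H/m, close to (but no longer exactly) 4*pi*1e-7
```

Because α is now a measured quantity, µ0 inherits its uncertainty rather than being exact.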
In the 2019 redefinition of SI base units, the value of µ0 is no longer fixed by definition; it is instead determined from measurements of the fine-structure constant, based on an average of all then-existing measurements of α. The resulting value of µ0 is 3.6 standard deviations away from its old defined value of 4π × 10⁻⁷ H/m, although the mean differs from that old value by only 0.54 parts per billion.
For reasons of convenience, historically the value of the reciprocal of the fine-structure constant is often specified. The 2018 CODATA recommended value is given by
α⁻¹ = 137.035999084(21).
While the value of α can be estimated from the values of the constants appearing in any of its definitions, the theory of quantum electrodynamics (QED) provides a way to measure α directly using the quantum Hall effect or the anomalous magnetic moment of the electron. Other methods include the AC Josephson effect and photon recoil in atom interferometry. There is general agreement for the value of α, as measured by these different methods. The preferred methods in 2019 are measurements of electron anomalous magnetic moments and of photon recoil in atom interferometry. The theory of QED predicts a relationship between the dimensionless magnetic moment of the electron and the fine-structure constant α (the magnetic moment of the electron is also referred to as the "Landé g-factor" and symbolized as g). The most precise value of α obtained experimentally (as of 2012) is based on a measurement of g using a one-electron so-called "quantum cyclotron" apparatus, together with a calculation via the theory of QED that involved tenth-order Feynman diagrams:
α⁻¹ = 137.035999174(35).
This measurement of α has a relative standard uncertainty of 2.5 × 10⁻¹⁰. This value and uncertainty are about the same as the latest experimental results. A further refinement of this work was published at the end of 2020, giving the value
α⁻¹ = 137.035999206(11),
with a relative accuracy of 81 parts per trillion.
The fine-structure constant, α, has several physical interpretations.
The optical conductivity of graphene for visible frequencies is theoretically given by πG₀/4 (where G₀ = 2e²/h is the conductance quantum), and as a result its light absorption and transmission properties can be expressed in terms of the fine-structure constant alone. The absorption value for normal-incident light on graphene in vacuum would then be given by πα/(1 + πα/2)² or 2.24%, and the transmission by 1/(1 + πα/2)² or 97.75% (experimentally observed to be between 97.6% and 97.8%).
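The absorption and transmission figures above follow from α alone; a minimal sketch, assuming the CODATA 2018 value of α:

```python
import math

# Light absorption and transmission of normal-incident light on suspended
# graphene, which depend only on the fine-structure constant alpha.
alpha = 7.2973525693e-3                # CODATA 2018 value (assumed)
pa = math.pi * alpha                   # pi*alpha ~ 0.0229

absorption = pa / (1 + pa / 2)**2      # ~0.0224 -> about 2.24%
transmission = 1 / (1 + pa / 2)**2     # ~0.9775 -> about 97.75%
```

Note that the frequently quoted "graphene absorbs πα ≈ 2.3% of light" is the first-order version of the same expression.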
The fine-structure constant gives the maximum positive charge of an atomic nucleus that will allow a stable electron orbit around it within the Bohr model (element feynmanium). For an electron orbiting an atomic nucleus with atomic number Z, mv²/r = (1/4πε₀)(Ze²/r²). The Heisenberg momentum/position uncertainty relationship of such an electron is just mvr = ℏ. The relativistic limiting value for v is c, and so the limiting value for Z is the reciprocal of the fine-structure constant, 137.
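The limiting Z can be sketched numerically: combining the two relations above gives v = Zαc, so the orbit reaches the speed of light when Z = 1/α. The CODATA 2018 value of α is assumed here:

```python
# Combining m*v**2/r = Z*e**2/(4*pi*eps0*r**2) with m*v*r = hbar gives
# v = Z*alpha*c, so a Bohr orbit becomes relativistically impossible
# (v > c) once Z exceeds 1/alpha.
alpha = 7.2973525693e-3   # CODATA 2018 value (assumed)

z_max = 1 / alpha         # ~137.036: largest Z allowing a stable orbit
v_over_c = 137 * alpha    # electron speed fraction for Z = 137
print(round(z_max, 3), round(v_over_c, 5))
```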
The magnetic moment of the electron indicates that the charge is circulating at a radius r_Q with the velocity of light. It generates the radiation energy m_e c² and has an angular momentum L = 1ℏ = r_Q m_e c. The field energy of the stationary Coulomb field is m_e c² = e²/(4πε₀ r_e) and defines the classical electron radius r_e. These values inserted into the definition of alpha yield α = r_e/r_Q. It compares the dynamic structure of the electron with the classical static assumption.
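The identity α = r_e/r_Q can be verified numerically; the CODATA 2018 constant values below are assumptions of this sketch:

```python
import math

# Ratio of the classical electron radius r_e to the radius r_Q = hbar/(m_e*c)
# at which a light-speed circulating charge carries angular momentum hbar.
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e = 9.1093837015e-31    # electron mass, kg
c = 299792458             # speed of light, m/s (exact)
e = 1.602176634e-19       # elementary charge, C (exact)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

r_q = hbar / (m_e * c)                          # ~3.862e-13 m
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)  # ~2.818e-15 m
alpha = r_e / r_q                               # ~1/137.036
```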
Alpha is related to the probability that an electron will emit or absorb a photon.
Given two hypothetical point particles each of Planck mass and elementary charge, separated by any distance, α is the ratio of their electrostatic repulsive force to their gravitational attractive force.
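This force ratio can be checked numerically; the separation and the gravitational constant G both cancel, leaving e²/(4πε₀ℏc) = α. CODATA 2018 values are assumed in this sketch:

```python
import math

# Ratio of electrostatic repulsion to gravitational attraction between two
# point particles, each carrying the elementary charge and one Planck mass.
G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34    # reduced Planck constant, J*s
c = 299792458             # speed of light, m/s (exact)
e = 1.602176634e-19       # elementary charge, C (exact)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

m_planck = math.sqrt(hbar * c / G)              # Planck mass, ~2.18e-8 kg
r = 1.0                                         # any separation (it cancels)
f_electric = e**2 / (4 * math.pi * eps0 * r**2)
f_gravity = G * m_planck**2 / r**2
ratio = f_electric / f_gravity                  # ~1/137.036
```

Because m_planck² = ℏc/G, the value of G drops out of the ratio entirely, which is why the result holds "separated by any distance" and for any measured G.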
In quantum electrodynamics, the more thorough quantum field theory underlying the electromagnetic coupling, the renormalization group dictates how the strength of the electromagnetic interaction grows logarithmically as the relevant energy scale increases. The value of the fine-structure constant α is linked to the observed value of this coupling associated with the energy scale of the electron mass: the electron is a lower bound for this energy scale, because it (and the positron) is the lightest charged object whose quantum loops can contribute to the running. Therefore, 1/137.036 is the asymptotic value of the fine-structure constant at zero energy.
At higher energies, such as the scale of the Z boson, about 90 GeV, one measures an effective α ≈ 1/127 instead.
As the energy scale increases, the strength of the electromagnetic interaction in the Standard Model approaches that of the other two fundamental interactions, a feature important for grand unification theories. If quantum electrodynamics were an exact theory, the fine-structure constant would actually diverge at an energy known as the Landau pole; this fact undermines the consistency of quantum electrodynamics beyond perturbative expansions.
Based on the precise measurement of the hydrogen atom spectrum by Michelson and Morley in 1887, Arnold Sommerfeld extended the Bohr model to include elliptical orbits and relativistic dependence of mass on velocity. He introduced a term for the fine-structure constant in 1916. The first physical interpretation of the fine-structure constant α was as the ratio of the velocity of the electron in the first circular orbit of the relativistic Bohr atom to the speed of light in the vacuum. Equivalently, it was the quotient between the minimum angular momentum allowed by relativity for a closed orbit, and the minimum angular momentum allowed for it by quantum mechanics. It appears naturally in Sommerfeld's analysis, and determines the size of the splitting or fine structure of the hydrogenic spectral lines. This constant was not seen as significant until Paul Dirac's linear relativistic wave equation in 1928, which gave the exact fine-structure formula.
With the development of quantum electrodynamics (QED) the significance of α has broadened from a spectroscopic phenomenon to a general coupling constant for the electromagnetic field, determining the strength of the interaction between electrons and photons. The term α/2π is engraved on the tombstone of one of the pioneers of QED, Julian Schwinger, referring to his calculation of the anomalous magnetic dipole moment.
The CODATA values in the above table are computed by averaging other measurements; they are not independent experiments.
Physicists have pondered whether the fine-structure constant is in fact constant, or whether its value differs by location and over time. A varying α has been proposed as a way of solving problems in cosmology and astrophysics. String theory and other proposals for going beyond the Standard Model of particle physics have led to theoretical interest in whether the accepted physical constants (not just α) actually vary.
In the experiments below, Δα represents the change in α over time, which can be computed as Δα = α_prev − α_now. If the fine-structure constant really is a constant, then any experiment should show that
Δα = 0,
or as close to zero as experiment can measure. Any value far from zero would indicate that α does change over time. So far, most experimental data are consistent with α being constant.
Improved technology at the dawn of the 21st century made it possible to probe the value of α at much larger distances and to a much greater accuracy. In 1999, a team led by John K. Webb of the University of New South Wales claimed the first detection of a variation in α. Using the Keck telescopes and a data set of 128 quasars at redshifts 0.5 < z < 3, Webb et al. found that their spectra were consistent with a slight increase in α over the last 10-12 billion years. Specifically, they found that Δα/α = (−0.57 ± 0.10) × 10⁻⁵.
This measured value is very small, but the error bars do not actually include zero. This result either indicates that α is not constant or that there is experimental error unaccounted for.
However, in 2007 simple flaws were identified in the analysis method of Chand et al., discrediting those results.
King et al. have used Markov chain Monte Carlo methods to investigate the algorithm used by the UNSW group to determine Δα/α from the quasar spectra, and have found that the algorithm appears to produce correct uncertainties and maximum likelihood estimates for Δα/α for particular models. This suggests that the statistical uncertainties and best estimate for Δα/α stated by Webb et al. and Murphy et al. are robust.
Lamoreaux and Torgerson analyzed data from the Oklo natural nuclear fission reactor in 2004, and concluded that α has changed in the past 2 billion years by 45 parts per billion. They claimed that this finding was "probably accurate to within 20%". Accuracy is dependent on estimates of impurities and temperature in the natural reactor. These conclusions remain to be verified.
In 2007, Khatri and Wandelt of the University of Illinois at Urbana-Champaign realized that the 21 cm hyperfine transition in neutral hydrogen of the early universe leaves a unique absorption line imprint in the cosmic microwave background radiation. They proposed using this effect to measure the value of α during the epoch before the formation of the first stars. In principle, this technique provides enough information to measure a variation of 1 part in 10⁹ (4 orders of magnitude better than the current quasar constraints). However, the constraint which can be placed on α is strongly dependent upon effective integration time, going as t^(−1/2). The European LOFAR radio telescope would only be able to constrain Δα/α to about 0.3%. The collecting area required to constrain Δα/α to the current level of quasar constraints is on the order of 100 square kilometers, which is economically impracticable at the present time.
Present rate of change
In 2008, Rosenband et al. used the frequency ratio of Al⁺ and Hg⁺ in single-ion optical atomic clocks to place a very stringent constraint on the present-time temporal variation of α, namely α̇/α = (−1.6 ± 2.3) × 10⁻¹⁷ per year. Note that any present-day null constraint on the time variation of alpha does not necessarily rule out time variation in the past. Indeed, some theories that predict a variable fine-structure constant also predict that the value of the fine-structure constant should become practically fixed once the universe enters its current dark-energy-dominated epoch.
Spatial variation - Australian dipole
In September 2010, researchers from Australia said they had identified a dipole-like structure in the variation of the fine-structure constant across the observable universe. They used data on quasars obtained by the Very Large Telescope, combined with the previous data obtained by Webb at the Keck telescopes. The fine-structure constant appears to have been larger by one part in 100,000 in the direction of the southern hemisphere constellation Ara, 10 billion years ago. Similarly, the constant appeared to have been smaller by a similar fraction in the northern direction, 10 billion years ago.
In September and October 2010, after the release of Webb's research, physicists Chad Orzel and Sean M. Carroll suggested various ways in which Webb's observations might be wrong. Orzel argues that the study may contain wrong data due to subtle differences between the two telescopes: in one telescope the data set was skewed slightly high and in the other slightly low, so that the two effects cancel where the samples overlap. He finds it suspicious that the sources showing the greatest changes are all observed by one telescope, with the region observed by both telescopes aligning so well with the sources where no effect is observed. Carroll suggested a totally different approach; he treats the fine-structure constant as a scalar field and claims that if the telescopes are correct and the fine-structure constant varies smoothly over the universe, then the scalar field must have a very small mass. However, previous research has shown that the mass is not likely to be extremely small. Both of these scientists' early criticisms point to the fact that different techniques are needed to confirm or contradict the results, as Webb et al. also concluded in their study.
In October 2011, Webb et al. reported a variation in α dependent on both redshift and spatial direction. They report that "the combined data set fits a spatial dipole", with α increasing with redshift in one direction and decreasing in the other, and that "independent VLT and Keck samples give consistent dipole directions and amplitudes".
In 2020, the team verified their previous results, finding a dipole structure in the strength of the electromagnetic force using the most distant quasar measurements. Observations of a quasar from when the universe was only 0.8 billion years old, made with an AI analysis method employed on data from the Very Large Telescope (VLT), found a spatial variation preferred over a no-variation model.
The anthropic principle is a controversial argument of why the fine-structure constant has the value it does: stable matter, and therefore life and intelligent beings, could not exist if its value were much different. For instance, were α to change by 4%, stellar fusion would not produce carbon, so that carbon-based life would be impossible. If α were greater than 0.1, stellar fusion would be impossible, and no place in the universe would be warm enough for life as we know it.
Numerological explanations and multiverse theory
As a dimensionless constant which does not seem to be directly related to any mathematical constant, the fine-structure constant has long fascinated physicists.
Arthur Eddington argued that the value could be "obtained by pure deduction" and he related it to the Eddington number, his estimate of the number of protons in the universe. This led him in 1929 to conjecture that the reciprocal of the fine-structure constant was not approximately but precisely the integer 137. By the 1940s, experimental values for 1/α deviated sufficiently from 137 to refute Eddington's arguments.
The fine-structure constant so intrigued physicist Wolfgang Pauli that he collaborated with psychoanalyst Carl Jung in a quest to understand its significance. Similarly, Max Born believed that if the value of ? differed, the universe would degenerate, and thus that ? = 1/137 is a law of nature.
There is a most profound and beautiful question associated with the observed coupling constant, e - the amplitude for a real electron to emit or absorb a real photon. It is a simple number that has been experimentally determined to be close to 0.08542455. (My physicist friends won't recognize this number, because they like to remember it as the inverse of its square: about 137.03597 with an uncertainty of about 2 in the last decimal place. It has been a mystery ever since it was discovered more than fifty years ago, and all good theoretical physicists put this number up on their wall and worry about it.)
Immediately you would like to know where this number for a coupling comes from: is it related to pi or perhaps to the base of natural logarithms? Nobody knows. It's one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by humans. You might say the "hand of God" wrote that number, and "we don't know how He pushed His pencil." We know what kind of a dance to do experimentally to measure this number very accurately, but we don't know what kind of dance to do on the computer to make this number come out - without putting it in secretly!
Conversely, statistician I. J. Good argued that a numerological explanation would only be acceptable if it could be based on a good theory that is not yet known but "exists" in the sense of a Platonic Ideal.
Attempts to find a mathematical basis for this dimensionless constant have continued up to the present time. However, no numerological explanation has ever been accepted by the physics community.
The mystery about α is actually a double mystery. The first mystery - the origin of its numerical value α ≈ 1/137 - has been recognized and discussed for decades. The second mystery - the range of its domain - is generally unrecognized.
^ Morel, Léo; Yao, Zhibin; Cladé, Pierre; Guellati-Khélifa, Saïda (2020). "Determination of the fine-structure constant with an accuracy of 81 parts per trillion". Nature. 588: 61-65. doi:10.1038/s41586-020-2964-7.
^Michelson, Albert A.; Morley, Edward W. (1887). "Method of making the wave-length of sodium light the actual and practical standard of length". The American Journal of Science. 3rd series. 34 (204): 427-430. From p. 430: "Among other substances [that were] tried in the preliminary experiments, were thallium, lithium, and hydrogen. ... It may be noted, that in [the] case of the red hydrogen line, the interference phenomena disappeared at about 15,000 wave-lengths, and again at about 45,000 wave-lengths: so that the red hydrogen line must be a double line with the components about one-sixtieth as distant as the sodium lines."
^ Sommerfeld, A. (1916). "Zur Quantentheorie der Spektrallinien" [On the quantum theory of spectral lines]. Annalen der Physik. 4th series (in German). 51 (17): 1-94. Bibcode:1916AnP...356....1S. doi:10.1002/andp.19163561702. From p. 91: "Wir fügen den Bohrschen Gleichungen (46) und (47) die charakteristische Konstante unserer Feinstrukturen (49) α = 2πe²/ch hinzu, die zugleich mit der Kenntnis des Wasserstoffdubletts oder des Heliumtripletts in §10 oder irgend einer analogen Struktur bekannt ist." (We add, to Bohr's equations (46) and (47), the characteristic constant of our fine structures (49) α = 2πe²/ch, which is known at once from knowledge of the hydrogen doublet or the helium triplet in §10 or any analogous structure.)
^ King, J. A.; Mortlock, D. J.; Webb, J. K.; Murphy, M. T. (2009). "Markov Chain Monte Carlo methods applied to measuring the fine structure constant from quasar spectroscopy". Memorie della Societa Astronomica Italiana. 80: 864. arXiv:0910.2699. Bibcode:2009MmSAI..80..864K.
^ I. J. Good (1990). "A Quantal Hypothesis for Hadrons and the Judging of Physical Numerology". In G. R. Grimmett; D. J. A. Welsh (eds.). Disorder in Physical Systems. Oxford University Press. p. 141. ISBN 978-0-19-853215-6. I. J. Good: There have been a few examples of numerology that have led to theories that transformed society: see the mention of Kirchhoff and Balmer in Good (1962, p. 316) ... and one can well include Kepler on account of his third law. It would be fair enough to say that numerology was the origin of the theories of electromagnetism, quantum mechanics, gravitation.... So I intend no disparagement when I describe a formula as numerological. When a numerological formula is proposed, then we may ask whether it is correct. ... I think an appropriate definition of correctness is that the formula has a good explanation, in a Platonic sense, that is, the explanation could be based on a good theory that is not yet known but 'exists' in the universe of possible reasonable ideas.