Thermodynamic temperature is a quantity defined in thermodynamics as distinct from kinetic theory or statistical mechanics. A thermodynamic temperature reading of zero is of particular importance for the third law of thermodynamics. By convention, it is reported on the Kelvin scale of temperature, in which the unit of measure is the kelvin (unit symbol: K). For comparison, a temperature of 295 K is equal to 21.85 °C and 71.33 °F.
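The scale relationships above can be sketched in code; the following Python functions (names are illustrative, not from any standard library) reproduce the 295 K example:

```python
def kelvin_to_celsius(t_k):
    # The Celsius scale is offset from the Kelvin scale by exactly 273.15.
    return t_k - 273.15

def kelvin_to_fahrenheit(t_k):
    # A degree Fahrenheit is 5/9 the magnitude of a kelvin; 273.15 K is 32 °F.
    return (t_k - 273.15) * 9 / 5 + 32

celsius = kelvin_to_celsius(295.0)        # ≈ 21.85 °C
fahrenheit = kelvin_to_fahrenheit(295.0)  # ≈ 71.33 °F
```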
At the zero point of thermodynamic temperature, absolute zero, the particle constituents of matter have minimal motion and can become no colder.^{[1]}^{[2]} Absolute zero, which is a temperature of zero kelvins (0 K), is precisely equal to −273.15 °C and −459.67 °F. Matter at absolute zero has no remaining transferable average kinetic energy, and the only remaining particle motion is due to an ever-pervasive quantum mechanical phenomenon called zero-point energy.^{[3]} Though the atoms in, for instance, a container of liquid helium that was precisely at absolute zero would still jostle slightly due to zero-point energy, a theoretically perfect heat engine with such helium as one of its working fluids could never transfer any net kinetic energy (heat energy) to the other working fluid and no thermodynamic work could occur.
Temperature is generally expressed in absolute terms when scientifically examining temperature's interrelationships with certain other physical properties of matter such as its volume or pressure (see Gay-Lussac's law), or the wavelength of its emitted blackbody radiation. Absolute temperature is also useful when calculating chemical reaction rates (see Arrhenius equation). Furthermore, absolute temperature is typically used in cryogenics and related phenomena like superconductivity, as per the following example usage: "Conveniently, tantalum's transition temperature (T_{c}) of 4.4924 kelvin is slightly above the 4.2221 K boiling point of helium."
The International System of Units (SI) specifies the Kelvin scale for measuring thermodynamic temperature, and the unit of measure kelvin (unit symbol: K) for specific values along the scale. The kelvin is also used for denoting temperature intervals (a span or difference between two temperatures) as per the following example usage: "A 60/40 tin/lead solder is non-eutectic and is plastic through a range of 5 kelvins as it solidifies." A temperature interval of one degree Celsius is the same magnitude as one kelvin.
The magnitude of the kelvin was redefined in 2019 in relation to the very physical property underlying thermodynamic temperature: the kinetic energy of atomic particle motion. The redefinition fixed the Boltzmann constant at precisely 1.380 649 × 10^{−23} J/K.^{[4]} The compound unit of measure for the Boltzmann constant is often also given as J·K^{−1}, which may seem abstract due to the multiplication dot (·) and a kelvin symbol that is followed by a superscripted −1 exponent; however, this is merely another mathematical syntax denoting the same measure: joules (the SI unit for energy, including kinetic energy) per kelvin.
The property that imbues any substance with a temperature can be readily understood by examining the ideal gas law, which relates, per the Boltzmann constant, how heat energy causes precisely defined changes in the pressure and temperature of certain gases. This is because monatomic gases like helium and argon behave kinetically like perfectly elastic and spherical billiard balls that move only in a specific subset of the possible motions that can occur in matter: that comprising the three translational degrees of freedom. The translational degrees of freedom are the familiar billiard-ball-like movements along the X, Y, and Z axes of 3D space (see Fig. 1, below). This is why the noble gases all have the same specific heat capacity per atom and why that value is lowest of all the gases.
Molecules (two or more chemically bound atoms), however, have internal structure and therefore have additional internal degrees of freedom (see Fig. 3, below), which makes molecules absorb more heat energy for any given amount of temperature rise than do the monatomic gases. Heat energy is borne in all available degrees of freedom; this is in accordance with the equipartition theorem, so all available internal degrees of freedom have the same temperature as their three external degrees of freedom. However, the property that gives all gases their pressure, which is the net force per unit area on a container arising from gas particles recoiling off it, is a function of the kinetic energy borne in the atoms' and molecules' three translational degrees of freedom.^{[5]}
Fixing the Boltzmann constant at a specific value, along with other rulemaking, had the effect of precisely establishing the magnitude of the unit interval of thermodynamic temperature, the kelvin, in terms of the average kinetic behavior of the noble gases. Moreover, the starting point of the thermodynamic temperature scale, absolute zero, was reaffirmed as the point at which zero average kinetic energy remains in a sample; the only remaining particle motion being that comprising random vibrations due to zero-point energy.
Though there have been many other temperature scales throughout history, there have been only two scales for measuring thermodynamic temperature where absolute zero is their null point (0): The Kelvin scale and the Rankine scale.
Throughout the scientific world where modern measurements are nearly always made using the International System of Units, thermodynamic temperature is measured using the Kelvin scale. The Rankine scale is part of English engineering units in the United States and finds use in certain engineering fields, particularly in legacy reference works. The Rankine scale uses the degree Rankine (symbol: °R) as its unit, which is the same magnitude as the degree Fahrenheit (symbol: °F).
A unit increment of one degree Rankine is precisely 1/1.8 the magnitude of one kelvin; thus, to convert a specific temperature on the Kelvin scale to the Rankine scale, multiply by 1.8 (T_{°R} = T_{K} × 1.8), and to convert a temperature on the Rankine scale to the Kelvin scale, divide by 1.8 (T_{K} = T_{°R} / 1.8). Consequently, absolute zero is "0" for both scales, but the melting point of water ice (0 °C and 273.15 K) is 491.67 °R.
To convert temperature intervals (a span or difference between two temperatures), one uses the same formulas from the preceding paragraph; for instance, a range of 5 kelvins is precisely equal to a range of 9 degrees Rankine.
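A minimal sketch of these conversions in Python (the function names are illustrative):

```python
def kelvin_to_rankine(t_k):
    # One degree Rankine is 1/1.8 the magnitude of one kelvin.
    return t_k * 1.8

def rankine_to_kelvin(t_r):
    return t_r / 1.8

melting_ice = kelvin_to_rankine(273.15)  # ≈ 491.67 °R
# Temperature *intervals* convert with the same factor: a span of 5 K is 9 °R.
interval = kelvin_to_rankine(5.0)        # 9.0
```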
For 65 years, between 1954 and the 2019 redefinition of the SI base units, a temperature interval of one kelvin was defined as 1/273.16 of the difference between the triple point of water and absolute zero. The 1954 resolution by the International Bureau of Weights and Measures (known by the French-language acronym BIPM), plus later resolutions and publications, defined the triple point of water as precisely 273.16 K and acknowledged that it was "common practice" to accept that, due to previous conventions (namely, that 0 °C had long been defined as the melting point of water and that the triple point of water had long been experimentally determined to be indistinguishably close to 0.01 °C), the difference between the Celsius scale and Kelvin scale is 273.15 kelvins; which is to say, 0 °C equals 273.15 kelvins.^{[6]} The net effect of this as well as later resolutions was twofold: 1) they defined absolute zero as precisely 0 K, and 2) they defined the triple point of special isotopically controlled water called Vienna Standard Mean Ocean Water as precisely 273.16 kelvins and 0.01 °C. One effect of the aforementioned resolutions was that the melting point of water, while very close to 273.15 K and 0 °C, was not a defining value and was subject to refinement with more precise measurements.
The 1954 BIPM standard did a good job of establishing (within the uncertainties due to isotopic variations between water samples) temperatures around the freezing and triple points of water, but required that intermediate values between the triple point and absolute zero, as well as extrapolated values from room temperature and beyond, be experimentally determined via apparatus and procedures in individual labs. This shortcoming was addressed by the International Temperature Scale of 1990, or ITS‑90, which defined 13 additional points, from 13.8033 K to 1,357.77 K. While definitional, ITS‑90 had (and still has) some challenges, partly because eight of its extrapolated values depend upon the melting or freezing points of metal samples, which must remain exceedingly pure lest their melting or freezing points be affected (usually depressed).
The 2019 redefinition of the SI base units was primarily for the purpose of decoupling much of the SI system's definitional underpinnings from the kilogram, which was the last physical artifact defining an SI base unit (a platinum/iridium cylinder stored under three nested bell jars in a safe located in France) and which had highly questionable stability. The solution required that four physical constants, including the Boltzmann constant, be definitionally fixed.
Assigning the Boltzmann constant a precisely defined value had no practical effect on modern thermometry except for the most exquisitely precise measurements. Before the redefinition, the triple point of water was exactly 273.16 K and 0.01 °C and the Boltzmann constant was experimentally determined to be 1.380 649 03(51) × 10^{−23} J/K, where the "(51)" denotes the uncertainty in the two least significant digits (the 03) and equals a relative standard uncertainty of 0.37 ppm.^{[7]} Afterwards, by defining the Boltzmann constant as exactly 1.380 649 × 10^{−23} J/K, the 0.37 ppm uncertainty was transferred to the triple point of water, which became an experimentally determined value of 273.1600 ±0.0001 K (0.0100 ±0.0001 °C). That the triple point of water ended up being exceedingly close to 273.16 K after the SI redefinition was no accident; the final value of the Boltzmann constant was determined, in part, through clever experiments with argon and helium that used the triple point of water for their key reference temperature.^{[8]}^{[9]}
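The quoted 0.37 ppm figure can be checked with simple arithmetic; this short Python sketch (variable names are illustrative) derives it from the "(51)" concise uncertainty notation:

```python
# Pre-2019 measured Boltzmann constant: 1.380 649 03(51) x 10^-23 J/K.
# The "(51)" is the standard uncertainty in the last two quoted digits.
value = 1.38064903e-23        # J/K
uncertainty = 0.00000051e-23  # J/K, i.e. 51 in the last two digit places

relative_ppm = uncertainty / value * 1e6  # ≈ 0.37 ppm relative standard uncertainty
```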
Notwithstanding the 2019 redefinition, water triple-point cells continue to serve in modern thermometry as exceedingly precise calibration references at 273.16 K and 0.01 °C. Moreover, the triple point of water remains one of the 14 calibration points comprising ITS‑90, which spans from the triple point of hydrogen (13.8033 K) to the freezing point of copper (1,357.77 K), which is a nearly hundredfold range of thermodynamic temperature.
The thermodynamic temperature of any bulk quantity of a substance (a statistically significant quantity of particles) is directly proportional to the average kinetic energy of a specific kind of particle motion known as translational motion. These simple movements in the three X-, Y-, and Z-axis dimensions of space mean the particles move in the three spatial degrees of freedom. This particular form of kinetic energy is sometimes referred to as kinetic temperature. Translational motion is but one form of heat energy and is what gives gases not only their temperature, but also their pressure and the vast majority of their volume. This relationship between the temperature, pressure, and volume of gases is established by the ideal gas law's formula pV = nRT and is embodied in the gas laws.
Though the kinetic energy borne exclusively in the three translational degrees of freedom comprises the thermodynamic temperature of a substance, molecules, as can be seen in Fig. 3, can have other degrees of freedom, all of which fall under three categories: bond length, bond angle, and rotational. Not all three additional categories are necessarily available to all molecules, and even for molecules that can experience all three, some can be "frozen out" below a certain temperature. Nonetheless, all those degrees of freedom that are available to the molecules under a particular set of conditions contribute to the specific heat capacity of a substance; which is to say, they increase the amount of heat (kinetic energy) required to raise a given amount of the substance by one kelvin or one degree Celsius.
The relationship of kinetic energy, mass, and velocity is given by the formula E_{k} = 1/2mv^{2}.^{[10]} Accordingly, particles with one unit of mass moving at one unit of velocity have precisely the same kinetic energy, and precisely the same temperature, as those with four times the mass but half the velocity.
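The mass-velocity tradeoff described above can be verified numerically; a short Python sketch using unitless quantities:

```python
def kinetic_energy(m, v):
    # E_k = (1/2) * m * v^2
    return 0.5 * m * v**2

# One unit of mass moving at one unit of velocity...
e1 = kinetic_energy(1.0, 1.0)  # 0.5
# ...has the same kinetic energy as four times the mass at half the velocity.
e2 = kinetic_energy(4.0, 0.5)  # 0.5
```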
The extent to which the kinetic energy of translational motion in a statistically significant collection of atoms or molecules in a gas contributes to the pressure and volume of that gas is a proportional function of thermodynamic temperature as established by the Boltzmann constant (symbol: k_{B}). The Boltzmann constant also relates the thermodynamic temperature of a gas to the mean kinetic energy of an individual particle's translational motion as follows:

Ē = (3/2) k_{B} T

where:

Ē is the mean kinetic energy of translational motion, in joules (J)
k_{B} = 1.380 649 × 10^{−23} J/K is the Boltzmann constant
T is the thermodynamic temperature, in kelvins (K)
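The relationship between temperature and mean translational kinetic energy can be sketched in Python; this assumes the standard formula Ē = (3/2)·k_B·T for the mean kinetic energy per particle (the function name is illustrative):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact since the 2019 SI redefinition)

def mean_translational_energy(t_kelvin):
    # Mean kinetic energy of one particle's translational motion: (3/2) * k_B * T
    return 1.5 * K_B * t_kelvin

# For example, at the triple point of water (273.16 K):
e_tp = mean_translational_energy(273.16)  # ≈ 5.657e-21 J per particle
```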
While the Boltzmann constant is useful for finding the mean kinetic energy in a sample of particles, it's important to note that even when a substance is isolated and in thermodynamic equilibrium (all parts are at a uniform temperature and no heat is going into or out of it), the translational motions of individual atoms and molecules occur across a wide range of speeds (see animation in Fig. 1 above). At any one instant, the proportion of particles moving at a given speed within this range is determined by probability as described by the Maxwell–Boltzmann distribution. The graph shown here in Fig. 2 shows the speed distribution of 5500 K helium atoms. They have a most probable speed of 4.780 km/s (0.2092 s/km). However, a certain proportion of atoms at any given instant are moving faster while others are moving relatively slowly; some are momentarily at a virtual standstill (off the x-axis to the right). This graph uses inverse speed for its x-axis so the shape of the curve can easily be compared to the curves in Fig. 5 below. In both graphs, zero on the x-axis represents infinite temperature. Additionally, the x- and y-axes on both graphs are scaled proportionally.
Although very specialized laboratory equipment is required to directly detect translational motions, the resultant collisions by atoms or molecules with small particles suspended in a fluid produces Brownian motion that can be seen with an ordinary microscope. The translational motions of elementary particles are very fast^{[11]} and temperatures close to absolute zero are required to directly observe them. For instance, when scientists at the NIST achieved a record-setting cold temperature of 700 nK (billionths of a kelvin) in 1994, they used optical lattice laser equipment to adiabatically cool cesium atoms. They then turned off the entrapment lasers and directly measured atom velocities of 7 mm per second in order to calculate their temperature.^{[12]} Formulas for calculating the velocity and speed of translational motion are given in the following footnote.^{[13]}
It is neither difficult to imagine atomic motions due to kinetic temperature, nor to distinguish between such motions and those due to zero-point energy. Consider the following hypothetical thought experiment, as illustrated in Fig. 2.5 at left, with an atom that is exceedingly close to absolute zero. Imagine peering through a common optical microscope set to 400 power, which is about the maximum practical magnification for optical microscopes. Such microscopes generally provide fields of view a bit over 0.4 mm in diameter. At the center of the field of view is a single levitated argon atom (argon comprises about 0.93% of air) that is illuminated and glowing against a dark backdrop. If this argon atom was at a beyond-record-setting one-trillionth of a kelvin above absolute zero,^{[14]} and was moving perpendicular to the field of view towards the right, it would require 13.9 seconds to move from the center of the image to the 200-micron tick mark; this travel distance is about the same as the width of the period at the end of this sentence on modern computer monitors. As the argon atom slowly moved, the positional jitter due to zero-point energy would be much less than the 200-nanometer (0.0002 mm) resolution of an optical microscope. Importantly, the atom's translational velocity of 14.43 microns per second constitutes all its retained kinetic energy due to not being precisely at absolute zero. Were the atom precisely at absolute zero, imperceptible jostling due to zero-point energy would cause it to very slightly wander, but the atom would perpetually be located, on average, at the same spot within the field of view. This is analogous to a boat that has had its motor turned off and is now bobbing slightly in relatively calm and windless ocean waters; even though the boat randomly drifts to and fro, it stays in the same spot in the long term and makes no headway through the water.
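The figures in this thought experiment are consistent with treating 14.43 µm/s as a per-axis (one-dimensional) RMS speed, √(k_B·T/m); that interpretation is an assumption of the sketch below, not something stated in the text (the full three-dimensional RMS speed would be √3 times larger):

```python
import math

K_B = 1.380649e-23           # Boltzmann constant, J/K
U = 1.66053906660e-27        # atomic mass constant, kg
M_AR = 39.948 * U            # mass of an argon atom, kg

t = 1e-12  # one-trillionth of a kelvin above absolute zero

# Assumed per-axis RMS speed: sqrt(k_B * T / m); this reproduces the quoted figures.
v = math.sqrt(K_B * t / M_AR)    # ≈ 1.44e-5 m/s, i.e. ~14.4 microns per second
seconds_to_200um = 200e-6 / v    # ≈ 13.9 s to cross 200 microns
```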
Accordingly, an atom that was precisely at absolute zero would not be "motionless," and yet, a statistically significant collection of such atoms would have zero net kinetic energy available to transfer to any other collection of atoms. This is because regardless of the kinetic temperature of the second collection of atoms, they too experience the effects of zeropoint energy. Such are the consequences of statistical mechanics and the nature of thermodynamics.
As mentioned above, there are other ways molecules can jiggle besides the three translational degrees of freedom that imbue substances with their kinetic temperature. As can be seen in the animation at right, molecules are complex objects; they are a population of atoms and thermal agitation can strain their internal chemical bonds in three different ways: via rotation, bond length, and bond angle movements; these are all types of internal degrees of freedom. This makes molecules distinct from monatomic substances (consisting of individual atoms) like the noble gases helium and argon, which have only the three translational degrees of freedom (the X, Y, and Z axes). Kinetic energy is stored in molecules' internal degrees of freedom, which gives them an internal temperature. Even though these motions are called "internal," the external portions of molecules still move (rather like the jiggling of a stationary water balloon). This permits the two-way exchange of kinetic energy between internal motions and translational motions with each molecular collision. Accordingly, as internal energy is removed from molecules, both their kinetic temperature (the kinetic energy of translational motion) and their internal temperature simultaneously diminish in equal proportions. This phenomenon is described by the equipartition theorem, which states that for any bulk quantity of a substance in equilibrium, the kinetic energy of particle motion is evenly distributed among all the active degrees of freedom available to the particles. Since the internal temperature of molecules is usually equal to their kinetic temperature, the distinction is usually of interest only in the detailed study of non-local thermodynamic equilibrium (non-LTE) phenomena such as combustion, the sublimation of solids, and the diffusion of hot gases in a partial vacuum.
The kinetic energy stored internally in molecules causes substances to contain more heat energy at any given temperature and to absorb additional internal energy for a given temperature increase. This is because any kinetic energy that is, at a given instant, bound in internal motions is not at that same instant contributing to the molecules' translational motions.^{[15]} This extra kinetic energy simply increases the amount of internal energy a substance absorbs for a given temperature rise. This property is known as a substance's specific heat capacity.
Different molecules absorb different amounts of internal energy for each incremental increase in temperature; that is, they have different specific heat capacities. High specific heat capacity arises, in part, because certain substances' molecules possess more internal degrees of freedom than others do. For instance, room-temperature nitrogen, which is a diatomic molecule, has five active degrees of freedom: the three comprising translational motion plus two rotational degrees of freedom internally. Not surprisingly, in accordance with the equipartition theorem, nitrogen has five-thirds the specific heat capacity per mole (a specific number of molecules) as do the monatomic gases.^{[16]} Another example is gasoline (see table showing its specific heat capacity). Gasoline can absorb a large amount of heat energy per mole with only a modest temperature change because each molecule comprises an average of 21 atoms and therefore has many internal degrees of freedom. Even larger, more complex molecules can have dozens of internal degrees of freedom.
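The five-thirds ratio cited above follows directly from the equipartition theorem; a minimal Python sketch (function name illustrative):

```python
R = 8.314462618  # molar gas constant, J/(mol·K)

def molar_cv(degrees_of_freedom):
    # Equipartition: each active degree of freedom contributes (1/2)*R
    # to the constant-volume molar heat capacity.
    return degrees_of_freedom * R / 2

cv_monatomic = molar_cv(3)  # helium, argon, ...: (3/2)R ≈ 12.47 J/(mol·K)
cv_nitrogen = molar_cv(5)   # 3 translational + 2 rotational: (5/2)R ≈ 20.79 J/(mol·K)
# cv_nitrogen / cv_monatomic is 5/3, as stated above.
```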
Heat conduction is the diffusion of thermal energy from hot parts of a system to cold parts. A system can be either a single bulk entity or a plurality of discrete bulk entities. The term bulk in this context means a statistically significant quantity of particles (which can be a microscopic amount). Whenever thermal energy diffuses within an isolated system, temperature differences within the system decrease (and entropy increases).
One particular heat conduction mechanism occurs when translational motion, the particle motion underlying temperature, transfers momentum from particle to particle in collisions. In gases, these translational motions are of the nature shown above in Fig. 1. As can be seen in that animation, not only does momentum (heat) diffuse throughout the volume of the gas through serial collisions, but entire molecules or atoms can move forward into new territory, bringing their kinetic energy with them. Consequently, temperature differences equalize throughout gases very quickly, especially for light atoms or molecules; convection speeds this process even more.^{[17]}
Translational motion in solids, however, takes the form of phonons (see Fig. 4 at right). Phonons are constrained, quantized wave packets that travel at the speed of sound of a given substance. The manner in which phonons interact within a solid determines a variety of its properties, including its thermal conductivity. In electrically insulating solids, phonon-based heat conduction is usually inefficient^{[18]} and such solids are considered thermal insulators (such as glass, plastic, rubber, ceramic, and rock). This is because in solids, atoms and molecules are locked into place relative to their neighbors and are not free to roam.
Metals, however, are not restricted to only phonon-based heat conduction. Thermal energy conducts through metals extraordinarily quickly because instead of direct molecule-to-molecule collisions, the vast majority of thermal energy is mediated via very light, mobile conduction electrons. This is why there is a near-perfect correlation between metals' thermal conductivity and their electrical conductivity.^{[19]} Conduction electrons imbue metals with their extraordinary conductivity because they are delocalized (i.e., not tied to a specific atom) and behave rather like a sort of quantum gas due to the effects of zero-point energy (for more on ZPE, see Note 1 below). Furthermore, electrons are relatively light with a rest mass only 1⁄1836 that of a proton. This is about the same ratio as a .22 Short bullet (29 grains or 1.88 g) compared to the rifle that shoots it. As expressed by Newton's third law of motion,
Law #3: All forces occur in pairs, and these two forces are equal in magnitude and opposite in direction.
However, a bullet accelerates faster than a rifle given an equal force. Since kinetic energy increases as the square of velocity, nearly all the kinetic energy goes into the bullet, not the rifle, even though both experience the same force from the expanding propellant gases. In the same manner, because they are much less massive, thermal energy is readily borne by mobile conduction electrons. Additionally, because they're delocalized and very fast, kinetic thermal energy conducts extremely quickly through metals with abundant conduction electrons.
Thermal radiation is a byproduct of the collisions arising from various vibrational motions of atoms. These collisions cause the electrons of the atoms to emit thermal photons (known as blackbody radiation). Photons are emitted anytime an electric charge is accelerated (as happens when electron clouds of two atoms collide). Even individual molecules with internal temperatures greater than absolute zero emit blackbody radiation from their atoms. In any bulk quantity of a substance at equilibrium, blackbody photons are emitted across a range of wavelengths in a spectrum that has a bell-curve-like shape called a Planck curve (see graph in Fig. 5 at right). The top of a Planck curve (the peak emittance wavelength) is located in a particular part of the electromagnetic spectrum depending on the temperature of the blackbody. Substances at extreme cryogenic temperatures emit at long radio wavelengths whereas extremely hot temperatures produce short gamma rays (see Table of common temperatures).
Blackbody radiation diffuses thermal energy throughout a substance as the photons are absorbed by neighboring atoms, transferring momentum in the process. Blackbody photons also easily escape from a substance and can be absorbed by the ambient environment; kinetic energy is lost in the process.
As established by the Stefan–Boltzmann law, the intensity of blackbody radiation increases as the fourth power of absolute temperature. Thus, a blackbody at 824 K (just short of glowing dull red) emits about 60 times as much radiant power as it does at 296 K (room temperature). This is why one can so easily feel the radiant heat from hot objects at a distance. At higher temperatures, such as those found in an incandescent lamp, blackbody radiation can be the principal mechanism by which thermal energy escapes a system.
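The "60 times" figure can be verified from the T⁴ dependence; a Python sketch of the Stefan–Boltzmann law (the function name is illustrative):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2·K^4)

def blackbody_exitance(t_kelvin):
    # Stefan-Boltzmann law: radiant power per unit area grows as T^4.
    return SIGMA * t_kelvin**4

# A blackbody at 824 K vs. one at 296 K (room temperature):
ratio = blackbody_exitance(824.0) / blackbody_exitance(296.0)  # ≈ 60
```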
The full range of the thermodynamic temperature scale, from absolute zero to absolute hot, and some notable points between them are shown in the table below.
Temperature | kelvin | Peak emittance wavelength^{[20]} of blackbody photons
Absolute zero (precisely by definition) | 0 K | ∞^{[3]}
Coldest measured temperature^{[21]} | 450 pK | 6,400 km
One millikelvin (precisely by definition) | 0.001 K | 2.897 77 m (Radio, FM band)^{[22]}
Cosmic microwave background radiation | 2.725 K | 1.063 mm (peak wavelength)
Water's triple point | 273.16 K | 10.6083 µm (Long wavelength I.R.)
ISO 1 standard temperature for precision metrology (precisely 20 °C by definition) | 293.15 K | 9.885 µm (Long wavelength I.R.)
Incandescent lamp^{[A]} | 2500 K^{[B]} | 1.16 µm (Near infrared)^{[C]}
Sun's visible surface^{[C]}^{[23]} | 5778 K | 501.5 nm (Green light)
Lightning bolt's channel | 28,000 K | 100 nm (Far ultraviolet light)
Sun's core | 16 MK | 0.18 nm (X-rays)
Thermonuclear explosion (peak temperature)^{[24]} | 350 MK | 8.3 × 10^{−3} nm (Gamma rays)
Sandia National Labs' Z machine^{[D]}^{[25]} | 2 GK | 1.4 × 10^{−3} nm (Gamma rays)
Core of a high-mass star on its last day^{[26]} | 3 GK | 1 × 10^{−3} nm (Gamma rays)
Merging binary neutron star system^{[27]} | 350 GK | 8 × 10^{−6} nm (Gamma rays)
Gamma-ray burst progenitors^{[28]} | 1 TK | 3 × 10^{−6} nm (Gamma rays)
CERN's proton vs. nucleus collisions^{[29]} | 10 TK | 3 × 10^{−7} nm (Gamma rays)
Universe 5.391 × 10^{−44} s after the Big Bang | 1.417 × 10^{32} K | 1.616 × 10^{−26} nm (Planck length)^{[30]}
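Most of the wavelength entries in the table above follow from Wien's displacement law, λ_peak = b/T; a Python spot-check (names are illustrative):

```python
B_WIEN = 2.897771955e-3  # Wien displacement constant b, m·K

def peak_wavelength_m(t_kelvin):
    # Wien's displacement law: the peak emittance wavelength is b / T.
    return B_WIEN / t_kelvin

# Spot-checking a few rows of the table:
lam_tp = peak_wavelength_m(273.16) * 1e6   # ≈ 10.6083 µm (water's triple point)
lam_sun = peak_wavelength_m(5778.0) * 1e9  # ≈ 501.5 nm (Sun's visible surface)
lam_mk = peak_wavelength_m(0.001)          # ≈ 2.898 m (one millikelvin)
```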
The kinetic energy of particle motion is just one contributor to the total thermal energy in a substance; another is the potential energy of phase transitions, namely the energy of the molecular bonds that can form in a substance as it cools (such as during condensing and freezing). The thermal energy required for a phase transition is called latent heat. This phenomenon may more easily be grasped by considering it in the reverse direction: latent heat is the energy required to break chemical bonds (such as during evaporation and melting). Almost everyone is familiar with the effects of phase transitions; for instance, steam at 100 °C can cause severe burns much faster than the 100 °C air from a hair dryer. This occurs because a large amount of latent heat is liberated as steam condenses into liquid water on the skin.
Even though thermal energy is liberated or absorbed during phase transitions, pure chemical elements, compounds, and eutectic alloys exhibit no temperature change whatsoever while they undergo them (see Fig. 7, below right). Consider one particular type of phase transition: melting. When a solid is melting, crystal lattice chemical bonds are being broken apart; the substance is transitioning from what is known as a more ordered state to a less ordered state. In Fig. 7, the melting of ice is shown within the lower left box heading from blue to green.
At one specific thermodynamic point, the melting point (which is 0 °C across a wide pressure range in the case of water), all the atoms or molecules are, on average, at the maximum energy threshold their chemical bonds can withstand without breaking away from the lattice. Chemical bonds are allornothing forces: they either hold fast, or break; there is no inbetween state. Consequently, when a substance is at its melting point, every joule of added thermal energy only breaks the bonds of a specific quantity of its atoms or molecules,^{[31]} converting them into a liquid of precisely the same temperature; no kinetic energy is added to translational motion (which is what gives substances their temperature). The effect is rather like popcorn: at a certain temperature, additional thermal energy can't make the kernels any hotter until the transition (popping) is complete. If the process is reversed (as in the freezing of a liquid), thermal energy must be removed from a substance.
As stated above, the thermal energy required for a phase transition is called latent heat. In the specific cases of melting and freezing, it's called enthalpy of fusion or heat of fusion. If the molecular bonds in a crystal lattice are strong, the heat of fusion can be relatively great, typically in the range of 6 to 30 kJ per mole for water and most of the metallic elements.^{[32]} If the substance is one of the monatomic gases (which have little tendency to form molecular bonds), the heat of fusion is more modest, ranging from 0.021 to 2.3 kJ per mole.^{[33]} Relatively speaking, phase transitions can be truly energetic events. To completely melt ice at 0 °C into water at 0 °C, one must add roughly 80 times the thermal energy as is required to increase the temperature of the same mass of liquid water by one degree Celsius. The metals' ratios are even greater, typically in the range of 400 to 1200 times.^{[34]} And the phase transition of boiling is much more energetic than freezing. For instance, the energy required to completely boil or vaporize water (what is known as enthalpy of vaporization) is roughly 540 times that required for a one-degree increase.^{[35]}
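The "roughly 80 times" figure for water can be checked with round-number property values; the constants below are commonly tabulated approximations, not values from the text:

```python
# Approximate properties of water (assumed, round-number values):
HEAT_OF_FUSION = 333.55e3  # J/kg, energy to melt ice at 0 °C into water at 0 °C
SPECIFIC_HEAT = 4186.0     # J/(kg·K), liquid water

# Melting a mass of ice takes roughly 80 times the energy needed to warm
# the same mass of liquid water by one degree Celsius:
ratio = HEAT_OF_FUSION / SPECIFIC_HEAT  # ≈ 80
```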
Water's sizable enthalpy of vaporization is why one's skin can be burned so quickly as steam condenses on it (heading from red to green in Fig. 7 above). In the opposite direction, this is why one's skin feels cool as liquid water on it evaporates (a process that occurs at a sub-ambient wet-bulb temperature that is dependent on relative humidity). Water's highly energetic enthalpy of vaporization is also an important factor underlying why solar pool covers (floating, insulated blankets that cover swimming pools when not in use) are so effective at reducing heating costs: they prevent evaporation. For instance, the evaporation of just 20 mm of water from a 1.29-meter-deep pool chills its water 8.4 degrees Celsius (15.1 °F).
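The pool example can be reproduced with a simple energy balance; this sketch assumes the 100 °C enthalpy of vaporization (2.26 MJ/kg) and round-number water properties, which are assumptions not stated in the text:

```python
# Assumed round-number properties of water:
LATENT_VAPORIZATION = 2.26e6  # J/kg (enthalpy of vaporization)
SPECIFIC_HEAT = 4186.0        # J/(kg·K), liquid water

evaporated_depth = 0.020  # m (20 mm of water evaporates from the surface)
pool_depth = 1.29         # m

# Heat removed per unit surface area, divided by the heat capacity of the
# water column beneath that area (density cancels out of the ratio):
delta_t = (evaporated_depth * LATENT_VAPORIZATION) / (pool_depth * SPECIFIC_HEAT)
# delta_t ≈ 8.4 °C, matching the figure quoted above
```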
The total energy of all particle motion (translational and internal, including that of conduction electrons), plus the potential energy of phase changes, plus zero-point energy,^{[3]} comprises the internal energy of a substance.
As a substance cools, different forms of internal energy and their related effects simultaneously decrease in magnitude: the latent heat of available phase transitions is liberated as a substance changes from a less ordered state to a more ordered state; the translational motions of atoms and molecules diminish (their kinetic temperature decreases); the internal motions of molecules diminish (their internal temperature decreases); conduction electrons (if the substance is an electrical conductor) travel somewhat slower;^{[36]} and blackbody radiation's peak emittance wavelength increases (the photons' energy decreases). When the particles of a substance are as close as possible to complete rest and retain only ZPE-induced quantum mechanical motion, the substance is at the temperature of absolute zero (T = 0).
Note that whereas absolute zero is the point of zero thermodynamic temperature and is also the point at which the particle constituents of matter have minimal motion, absolute zero is not necessarily the point at which a substance contains zero internal energy; one must be very precise with what one means by internal energy. Often, all the phase changes that can occur in a substance will have occurred by the time it reaches absolute zero. However, this is not always the case. Notably, T = 0 helium remains liquid at room pressure (Fig. 9 at right) and must be under a pressure of at least 25 bar (2.5 MPa) to crystallize. This is because helium's heat of fusion (the energy required to melt helium ice) is so low (only 21 joules per mole) that the motion-inducing effect of zero-point energy is sufficient to prevent it from freezing at lower pressures.
A further complication is that many solids change their crystal structure to more compact arrangements at extremely high pressures (up to millions of bars, or hundreds of gigapascals). These are known as solid-solid phase transitions, wherein latent heat is liberated as a crystal lattice changes to a more thermodynamically favorable, compact one.
The above complexities make for rather cumbersome blanket statements regarding the internal energy in T = 0 substances. Regardless of pressure though, what can be said is that at absolute zero, all solids with a lowest-energy crystal lattice, such as those with a closest-packed arrangement (see Fig. 8, above left), contain minimal internal energy, retaining only that due to the ever-present background of zero-point energy.^{[3]}^{[37]} One can also say that for a given substance at constant pressure, absolute zero is the point of lowest enthalpy (a measure of work potential that takes internal energy, pressure, and volume into consideration).^{[38]} Lastly, it is always true to say that all T = 0 substances contain zero kinetic thermal energy.^{[3]}^{[13]}
Thermodynamic temperature is useful not only for scientists; it can also be useful for laypeople in many disciplines involving gases. By expressing variables in absolute terms and applying Gay-Lussac's law of temperature/pressure proportionality, solutions to everyday problems are straightforward; for instance, calculating how a temperature change affects the pressure inside an automobile tire. If the tire has a cold gauge pressure of 200 kPa, then its absolute pressure is 300 kPa (the gauge reading plus roughly 100 kPa of atmospheric pressure).^{[39]}^{[40]}^{[41]} Room temperature ("cold" in tire terms) is 296 K. If the tire temperature is 20 °C hotter (20 kelvins), the solution is calculated as 316 K/296 K = 6.8% greater thermodynamic temperature and absolute pressure; that is, an absolute pressure of 320 kPa, which is a gauge pressure of 220 kPa.
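The tire calculation can be sketched as a small function. The ~100 kPa gauge-to-absolute offset follows the rounding used in the example above; at constant volume, absolute pressure scales with thermodynamic temperature:

```python
# The tire example worked with Gay-Lussac's law: at constant volume,
# absolute pressure is proportional to thermodynamic temperature.
# The ~100 kPa gauge-to-absolute offset follows the article's rounding.
ATMOSPHERIC_KPA = 100.0

def hot_gauge_pressure(cold_gauge_kpa: float,
                       cold_temp_k: float,
                       hot_temp_k: float) -> float:
    """Gauge pressure after a temperature change at constant volume."""
    cold_abs = cold_gauge_kpa + ATMOSPHERIC_KPA
    hot_abs = cold_abs * hot_temp_k / cold_temp_k
    return hot_abs - ATMOSPHERIC_KPA

print(f"{hot_gauge_pressure(200.0, 296.0, 316.0):.0f} kPa gauge")
```

Note that the proportionality must be applied to absolute pressure and absolute temperature; applying it to gauge pressure or degrees Celsius directly gives the wrong answer.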
The thermodynamic temperature is closely linked to the ideal gas law and its consequences. It can be linked also to the second law of thermodynamics. The thermodynamic temperature can be shown to have special properties, and in particular can be seen to be uniquely defined (up to some constant multiplicative factor) by considering the efficiency of idealized heat engines. Thus the ratio T_{2}/T_{1} of two temperatures T_{1} and T_{2} is the same in all absolute scales.
Strictly speaking, the temperature of a system is well-defined only if it is at thermal equilibrium. From a microscopic viewpoint, a material is at thermal equilibrium if the quantities of heat exchanged between its individual particles cancel out. There are many possible scales of temperature, derived from a variety of observations of physical phenomena.
Loosely stated, temperature differences dictate the direction of heat flow between two systems such that their combined energy is maximally distributed among their lowest possible states. We call this distribution "entropy". To better understand the relationship between temperature and entropy, consider the relationship between heat, work and temperature illustrated in the Carnot heat engine. The engine converts heat into work by directing a temperature gradient between a higher-temperature heat source, T_{H}, and a lower-temperature heat sink, T_{C}, through a gas-filled piston. The work done per cycle is equal to the difference between the heat supplied to the engine by T_{H}, q_{H}, and the heat supplied to T_{C} by the engine, q_{C}. The efficiency of the engine is the work divided by the heat put into the system, or

Efficiency = w_{cy}/q_{H} = (q_{H} − q_{C})/q_{H} = 1 − q_{C}/q_{H}   (Equation 1)

where w_{cy} is the work done per cycle. Thus the efficiency depends only on q_{C}/q_{H}.
Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures T_{1} and T_{2} must have the same efficiency; that is to say, the efficiency is a function of the temperatures only:

Efficiency = 1 − q_{C}/q_{H} = f(T_{H}, T_{C})   (Equation 2)
In addition, a reversible heat engine operating between temperatures T_{1} and T_{3} must have the same efficiency as one consisting of two cycles, one between T_{1} and another (intermediate) temperature T_{2}, and the second between T_{2} and T_{3}. If this were not the case, then energy (in the form of Q) would be wasted or gained, resulting in different overall efficiencies every time a cycle is split into component cycles; clearly a cycle can be composed of any number of smaller cycles.
With this understanding of the heats q_{1}, q_{2} and q_{3} exchanged at the temperatures T_{1}, T_{2} and T_{3}, mathematically,

q_{3}/q_{1} = f(T_{1}, T_{3}) = f(T_{1}, T_{2}) f(T_{2}, T_{3}).
But the first function is not a function of T_{2}; therefore the product of the final two functions must result in the removal of T_{2} as a variable. The only way is therefore to define the function f as follows:

f(T_{1}, T_{2}) = g(T_{2})/g(T_{1})

and

f(T_{2}, T_{3}) = g(T_{3})/g(T_{2})

so that

f(T_{1}, T_{3}) = g(T_{3})/g(T_{1}).

That is, the ratio of the heat exchanged is a function of the respective temperatures at which the exchange occurs. We can choose any monotonic function for our g(T); it is a matter of convenience and convention that we choose g(T) = T. Choosing then one fixed reference temperature (namely, the triple point of water), we establish the thermodynamic temperature scale.
Such a definition coincides with that of the ideal gas derivation; also, it is this definition of the thermodynamic temperature that enables us to represent the Carnot efficiency in terms of T_{H} and T_{C}, and hence derive that the (complete) Carnot cycle is isentropic:

q_{C}/q_{H} = f(T_{H}, T_{C}) = T_{C}/T_{H}   (Equation 3)
Substituting this back into our first formula for efficiency yields a relationship in terms of temperature:

Efficiency = 1 − q_{C}/q_{H} = 1 − T_{C}/T_{H}   (Equation 4)
Notice that for T_{C} = 0 the efficiency is 100% and that the efficiency becomes greater than 100% for T_{C} < 0, cases which are unrealistic. Subtracting the right-hand side of Equation 4 from the middle portion and rearranging gives

q_{H}/T_{H} − q_{C}/T_{C} = 0
where the negative sign indicates heat ejected from the system. The generalization of this equation is the Clausius theorem, which suggests the existence of a state function (i.e., a function which depends only on the state of the system, not on how it reached that state) defined (up to an additive constant) by

S = ∫ dq_{rev}/T   (Equation 5)
where the subscript rev indicates heat transfer in a reversible process. The function S corresponds to the entropy of the system, mentioned previously, and the change of S around any cycle is zero (as is necessary for any state function). Equation 5 can be rearranged to obtain an alternative definition for temperature in terms of entropy and heat (to avoid a logic loop, we should first define entropy through statistical mechanics):

T = dq_{rev}/dS
For a system in which the entropy S is a function S(E) of its energy E, the thermodynamic temperature is therefore given by

1/T = dS/dE,
so that the reciprocal of the thermodynamic temperature is the rate of increase of entropy with energy.
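The derivation above can be illustrated numerically: for a reversible Carnot cycle, q_{C}/q_{H} = T_{C}/T_{H}, so the entropy drawn from the hot reservoir exactly balances the entropy delivered to the cold one. The reservoir temperatures and heat input below are arbitrary example values:

```python
# Numerical illustration of the derivation above: for a reversible Carnot
# cycle, q_C/q_H = T_C/T_H, so the entropy taken in at T_H exactly matches
# the entropy ejected at T_C. Example values are arbitrary.
T_HOT = 500.0   # K, hot reservoir
T_COLD = 300.0  # K, cold reservoir
q_hot = 1000.0  # J of heat drawn from the hot reservoir per cycle

q_cold = q_hot * T_COLD / T_HOT    # heat ejected, since q_C/q_H = T_C/T_H
efficiency = 1.0 - T_COLD / T_HOT  # Carnot efficiency
work = q_hot - q_cold              # work extracted per cycle

print(f"efficiency {efficiency:.0%}, work {work:.0f} J")
print(f"entropy in {q_hot / T_HOT:.2f} J/K, out {q_cold / T_COLD:.2f} J/K")
```

The equal entropy flows in and out are exactly the statement that the complete Carnot cycle is isentropic.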
1702–1703: Guillaume Amontons (1663–1705) published two papers that may be used to credit him as being the first researcher to deduce the existence of a fundamental (thermodynamic) temperature scale featuring an absolute zero. He made the discovery while endeavoring to improve upon the air thermometers in use at the time. His J-tube thermometers comprised a mercury column that was supported by a fixed mass of air entrapped within the sensing portion of the thermometer. In thermodynamic terms, his thermometers relied upon the volume/temperature relationship of gas under constant pressure. His measurements of the boiling point of water and the melting point of ice showed that regardless of the mass of air trapped inside his thermometers or the weight of mercury the air was supporting, the reduction in air volume at the ice point was always the same ratio. This observation led him to posit that a sufficient reduction in temperature would reduce the air volume to zero. In fact, his calculations projected that absolute zero was equivalent to −240 °C, only 33.15 degrees short of the true value of −273.15 °C. Amontons's discovery of a one-to-one relationship between absolute temperature and absolute pressure was rediscovered a century later and popularized within the scientific community by Joseph Louis Gay-Lussac. Today, this principle of thermodynamics is commonly known as Gay-Lussac's law but is also known as Amontons's law.
1742: Anders Celsius (1701–1744) created a "backwards" version of the modern Celsius temperature scale. In Celsius's original scale, zero represented the boiling point of water and 100 represented the melting point of ice. In his paper Observations of two persistent degrees on a thermometer, he recounted his experiments showing that ice's melting point was effectively unaffected by pressure. He also determined with remarkable precision how water's boiling point varied as a function of atmospheric pressure. He proposed that zero on his temperature scale (water's boiling point) would be calibrated at the mean barometric pressure at mean sea level.
1744: Coincident with the death of Anders Celsius, the famous botanist Carl Linnaeus (1707–1778) effectively reversed^{[42]} Celsius's scale upon receipt of his first thermometer featuring a scale where zero represented the melting point of ice and 100 represented water's boiling point. The custom-made Linnaeus thermometer, for use in his greenhouses, was made by Daniel Ekström, Sweden's leading maker of scientific instruments at the time. For the next 204 years, the scientific and thermometry communities worldwide referred to this scale as the centigrade scale. Temperatures on the centigrade scale were often reported simply as degrees or, when greater specificity was desired, degrees centigrade. The symbol for temperature values on this scale was °C (in several formats over the years). Because the term centigrade was also the French-language name for a unit of angular measurement (one-hundredth of a right angle) and had a similar connotation in other languages, the term "centesimal degree" was used when very precise, unambiguous language was required by international standards bodies such as the International Bureau of Weights and Measures (Bureau international des poids et mesures, BIPM). The 9th CGPM (General Conference on Weights and Measures, Conférence générale des poids et mesures) and the CIPM (International Committee for Weights and Measures, Comité international des poids et mesures) formally adopted^{[43]} degree Celsius (symbol: °C) in 1948.^{[44]}
1777: In his book Pyrometrie (Berlin: Haude & Spener, 1779), completed four months before his death, Johann Heinrich Lambert (1728–1777), sometimes incorrectly referred to as Joseph Lambert, proposed an absolute temperature scale based on the pressure/temperature relationship of a fixed volume of gas. This is distinct from the volume/temperature relationship of gas under constant pressure that Guillaume Amontons discovered 75 years earlier. Lambert stated that absolute zero was the point where a simple straight-line extrapolation reached zero gas pressure and was equal to −270 °C.
Circa 1787: Notwithstanding the work of Guillaume Amontons 85 years earlier, Jacques Alexandre César Charles (1746–1823) is often credited with discovering, but not publishing, that the volume of a gas under constant pressure is proportional to its absolute temperature. The formula he created was V_{1}/T_{1} = V_{2}/T_{2}.
1802: Joseph Louis Gay-Lussac (1778–1850) published work (acknowledging the unpublished lab notes of Jacques Charles fifteen years earlier) describing how the volume of gas under constant pressure changes linearly with its absolute (thermodynamic) temperature. This behavior is called Charles's law and is one of the gas laws. His are the first known formulas to use the number 273 for the expansion coefficient of gas relative to the melting point of ice (indicating that absolute zero was equivalent to −273 °C).
1848: William Thomson (1824–1907), also known as Lord Kelvin, wrote in his paper, On an Absolute Thermometric Scale, of the need for a scale whereby infinite cold (absolute zero) was the scale's zero point, and which used the degree Celsius for its unit increment. Like Gay-Lussac, Thomson calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time. This absolute scale is known today as the kelvin thermodynamic temperature scale. It is noteworthy that Thomson's value of −273 was actually derived from 0.00366, which was the accepted expansion coefficient of gas per degree Celsius relative to the ice point. The inverse of 0.00366 expressed to five significant digits is 273.22, placing absolute zero at −273.22 °C, remarkably close to the true value of −273.15 °C.
1859: Macquorn Rankine (1820–1872) proposed a thermodynamic temperature scale similar to William Thomson's but which used the degree Fahrenheit for its unit increment. This absolute scale is known today as the Rankine thermodynamic temperature scale.
1877–1884: Ludwig Boltzmann (1844–1906) made major contributions to thermodynamics through an understanding of the role that particle kinetics and blackbody radiation played. His name is now attached to several of the formulas used today in thermodynamics.
Circa 1930s: Gas thermometry experiments carefully calibrated to the melting point of ice and boiling point of water showed that absolute zero was equivalent to −273.15 °C.
1948: Resolution 3 of the 9th CGPM (Conférence générale des poids et mesures, also known as the General Conference on Weights and Measures) fixed the triple point of water at precisely 0.01 °C. At this time, the triple point still had no formal definition for its equivalent kelvin value, which the resolution declared "will be fixed at a later date". The implication is that if the value of absolute zero measured in the 1930s was truly −273.15 °C, then the triple point of water (0.01 °C) was equivalent to 273.16 K. Additionally, both the CIPM (Comité international des poids et mesures, also known as the International Committee for Weights and Measures) and the CGPM formally adopted the name Celsius for the degree Celsius and the Celsius temperature scale.^{[44]}
1954: Resolution 3 of the 10th CGPM gave the kelvin scale its modern definition by choosing the triple point of water as its upper defining point (with no change to absolute zero being the null point) and assigning it a temperature of precisely 273.16 kelvins (actually written 273.16 degrees Kelvin at the time). This, in combination with Resolution 3 of the 9th CGPM, had the effect of defining absolute zero as being precisely zero kelvins and −273.15 °C.
1967/1968: Resolution 3 of the 13th CGPM renamed the unit increment of thermodynamic temperature kelvin, symbol K, replacing degree absolute, symbol °K. Further, feeling it useful to more explicitly define the magnitude of the unit increment, the 13th CGPM also decided in Resolution 4 that "The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water".
2005: The CIPM (Comité International des Poids et Mesures, also known as the International Committee for Weights and Measures) affirmed that for the purposes of delineating the temperature of the triple point of water, the definition of the kelvin thermodynamic temperature scale would refer to water having an isotopic composition defined as being precisely equal to the nominal specification of Vienna Standard Mean Ocean Water.
2019: In November 2018, the 26th General Conference on Weights and Measures (CGPM) changed the definition of the kelvin by fixing the Boltzmann constant to 1.380649 × 10^{−23} when expressed in the unit J/K. This change (and other changes in the definition of SI units) was made effective on the 144th anniversary of the Metre Convention, 20 May 2019.
Although absolute zero (T = 0) is not a state of zero molecular motion, it is the point of zero temperature and, in accordance with the Boltzmann constant, is also the point of zero particle kinetic energy and zero kinetic velocity. To understand how atoms can have zero kinetic velocity and simultaneously be vibrating due to ZPE, consider the following thought experiment: two T = 0 helium atoms in zero gravity are carefully positioned and observed to have an average separation of 620 pm between them (a gap of ten atomic diameters). It is an "average" separation because ZPE causes them to jostle about their fixed positions. Then one atom is given a kinetic kick of precisely 83 yoctokelvins (1 yK = 10^{−24} K). This is done in a way that directs this atom's velocity vector at the other atom. With 83 yK of kinetic energy between them, the 620 pm gap through their common barycenter would close at a rate of 719 pm/s and they would collide after 0.862 second. This is the same speed as shown in the Fig. 1 animation above. Before being given the kinetic kick, both T = 0 atoms had zero kinetic energy and zero kinetic velocity because they could persist indefinitely in that state and relative orientation even though both were being jostled by ZPE. At T = 0, no kinetic energy is available for transfer to other systems. The Boltzmann constant and its related formulas describe the realm of particle kinetics and velocity vectors, whereas ZPE is an energy field that jostles particles in ways described by the mathematics of quantum mechanics. In atomic and molecular collisions in gases, ZPE introduces a degree of chaos, i.e., unpredictability, to rebound kinetics; it is as likely that there will be less ZPE-induced particle motion after a given collision as more. This random nature of ZPE is why it has no net effect upon either the pressure or volume of any bulk quantity (a statistically significant quantity of particles) of T > 0 K gases.
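The numbers in the thought experiment can be checked with a back-of-envelope calculation, treating the 83 yK "kick" as kinetic energy via E = (3/2) k_B T = (1/2) m v² for the kicked helium atom (an assumed reading of the figures above, not the article's own derivation):

```python
import math

# Back-of-envelope check of the thought experiment above: the 83 yK kick
# read as kinetic energy, E = (3/2) k_B T = (1/2) m v^2, for one helium atom.
K_B = 1.380649e-23  # J/K, Boltzmann constant (exact since 2019)
M_HE4 = 6.6465e-27  # kg, mass of a helium-4 atom

t_kick_k = 83e-24   # 83 yoctokelvins, expressed in kelvins
gap_m = 620e-12     # 620 pm initial separation

v = math.sqrt(3 * K_B * t_kick_k / M_HE4)  # closing speed, m/s
print(f"closing speed ~{v * 1e12:.0f} pm/s, "
      f"collision after ~{gap_m / v:.3f} s")
```

The result reproduces the article's 719 pm/s closing speed and 0.862-second collision time.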
However, in T = 0 condensed matter (e.g., solids and liquids), ZPE causes interatomic jostling where atoms would otherwise be perfectly stationary. Inasmuch as the real-world effects that ZPE has on substances can vary as one alters a thermodynamic system (for example, due to ZPE, helium won't freeze unless under a pressure of at least 25 bar or 2.5 MPa), ZPE is very much a form of thermal energy and may properly be included when tallying a substance's internal energy.
Note too that absolute zero serves as the baseline atop which thermodynamics and its equations are founded because they deal with the exchange of thermal energy between "systems" (a plurality of particles and fields modeled as an average). Accordingly, one may examine ZPE-induced particle motion within a system that is at absolute zero but there can never be a net outflow of thermal energy from such a system. Also, the peak emittance wavelength of blackbody radiation shifts to infinity at absolute zero; indeed, a peak no longer exists and blackbody photons can no longer escape. Because of ZPE, however, virtual photons are still emitted at T = 0. Such photons are called "virtual" because they can't be intercepted and observed. Furthermore, this zero-point radiation has a unique zero-point spectrum (see Daniel C. Cole, "Derivation of the classical electromagnetic zero-point radiation spectrum via a classical thermodynamic operation involving van der Waals forces", Physical Review A 42 (1990) 1847). However, even though a T = 0 system emits zero-point radiation, no net heat flow Q out of such a system can occur because if the surrounding environment is at a temperature greater than T = 0, heat will flow inward, and if the surrounding environment is at T = 0, there will be an equal flux of ZP radiation both inward and outward. A similar Q equilibrium exists at T = 0 with the ZPE-induced spontaneous emission of photons (which is more properly called a stimulated emission in this context). The graph at upper right illustrates the relationship of absolute zero to zero-point energy. The graph also helps in understanding how zero-point energy got its name: it is the vibrational energy matter retains at the zero-kelvin point.