Its name derives from the concept of overtones, or harmonics in music: the wavelengths of the overtones of a vibrating string are 1/2, 1/3, 1/4, etc., of the string's fundamental wavelength. Every term of the series after the first is the harmonic mean of the neighboring terms; the phrase harmonic mean likewise derives from music.
The divergence of the harmonic series was first proven in the 14th century by Nicole Oresme, but this achievement fell into obscurity. Proofs were given in the 17th century by Pietro Mengoli and by Johann Bernoulli, the latter proof published and popularized by his brother Jacob Bernoulli.
Historically, harmonic sequences have had a certain popularity with architects, particularly in the Baroque period, when architects used them to establish the proportions of floor plans and elevations, and to establish harmonic relationships between the interior and exterior architectural details of churches and palaces.
There are several well-known proofs of the divergence of the harmonic series. A few of them are given below.
One way to prove divergence is to compare the harmonic series with another divergent series, where each denominator is replaced with the next-largest power of two:

1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 + 1/7 + 1/8 + ...
≥ 1 + 1/2 + 1/4 + 1/4 + 1/8 + 1/8 + 1/8 + 1/8 + ...
Each term of the harmonic series is greater than or equal to the corresponding term of the second series, and therefore the sum of the harmonic series must be greater than or equal to the sum of the second series. However, the sum of the second series is infinite:

1 + 1/2 + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + ... = 1 + 1/2 + 1/2 + 1/2 + ... = ∞
It follows (by the comparison test) that the sum of the harmonic series must be infinite as well. More precisely, the comparison above proves that

H_(2^k) ≥ 1 + k/2

for every positive integer k.
This proof, proposed by Nicole Oresme around 1350, is often considered a high point of medieval mathematics. It is still a standard proof taught in mathematics classes today. Cauchy's condensation test is a generalization of this argument.
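Oresme's grouping bound H_(2^k) ≥ 1 + k/2 is easy to check with exact rational arithmetic; the minimal sketch below (the helper name `harmonic` is our own) verifies it for the first few powers of two.

```python
from fractions import Fraction

def harmonic(n):
    """Exact partial sum H_n = 1 + 1/2 + ... + 1/n."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

# Oresme's grouping argument gives H_(2^k) >= 1 + k/2 for every k,
# so the partial sums are unbounded and the series diverges.
for k in range(11):
    assert harmonic(2 ** k) >= 1 + Fraction(k, 2)
```

Using `Fraction` keeps the comparison exact; with floats the inequality would still hold here, but equality cases (such as k = 1, where H_2 = 3/2 exactly) are cleaner with rationals.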
It is possible to prove that the harmonic series diverges by comparing its sum with an improper integral. Specifically, consider the arrangement of rectangles shown in the figure to the right. Each rectangle is 1 unit wide and 1/n units high, so the total area of the infinitely many rectangles is the sum of the harmonic series:

area of rectangles = 1 + 1/2 + 1/3 + 1/4 + ...
Additionally, the total area under the curve y = 1/x from 1 to infinity is given by a divergent improper integral:

area under curve = ∫_1^∞ (1/x) dx = ∞
Since this area is entirely contained within the rectangles, the total area of the rectangles must be infinite as well. More precisely, this proves that

H_N ≥ ∫_1^(N+1) (1/x) dx = ln(N + 1)

for every positive integer N.
The generalization of this argument is known as the integral test.
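The rectangle-versus-area comparison sandwiches the partial sums between logarithms; a small numerical check (the helper `harmonic` is our own) of the bounds ln(n + 1) ≤ H_n ≤ 1 + ln n:

```python
import math

def harmonic(n):
    # Sum the smallest terms first to limit floating-point rounding error.
    return sum(1.0 / k for k in range(n, 0, -1))

# The integral comparison gives ln(n + 1) <= H_n <= 1 + ln(n) for n >= 1
# (the upper bound comes from comparing rectangles below the curve).
for n in (1, 10, 1000, 100000):
    h = harmonic(n)
    assert math.log(n + 1) <= h <= 1 + math.log(n)
```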
The harmonic series diverges very slowly; its partial sums have logarithmic growth. In particular,

H_k = ln k + γ + ε_k,

where γ is the Euler-Mascheroni constant and ε_k ~ 1/(2k), which approaches 0 as k goes to infinity. Leonhard Euler proved both this and also the more striking fact that the sum which includes only the reciprocals of the primes also diverges, i.e.

1/2 + 1/3 + 1/5 + 1/7 + 1/11 + 1/13 + ... = ∞
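The logarithmic growth of the partial sums, with the 1/(2k) correction, can be confirmed numerically; in this sketch the value of γ is hard-coded rather than computed:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

n = 10**6
h = sum(1.0 / k for k in range(n, 0, -1))  # H_n, smallest terms first
approx = math.log(n) + GAMMA + 1 / (2 * n)
# The next correction term is -1/(12 n^2), so the error here is tiny.
assert abs(h - approx) < 1e-9
```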
|n|H_n expressed as a fraction|decimal|
|1|1|1.0|
|2|3/2|1.5|
|3|11/6|1.8333...|
|4|25/12|2.0833...|
|5|137/60|2.2833...|
|10|7381/2520|2.9289...|
The finite partial sums of the diverging harmonic series,

H_n = 1 + 1/2 + 1/3 + ... + 1/n,
are called harmonic numbers.
The difference H_n - ln n converges to the Euler-Mascheroni constant γ. The difference between any two distinct harmonic numbers is never an integer, and no harmonic number is an integer except H_1 = 1.
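The non-integrality of harmonic numbers can be spot-checked with exact fractions; the check below also verifies the stronger fact behind the usual proof, that the denominator of H_n in lowest terms is even for n ≥ 2:

```python
from fractions import Fraction

# For n >= 2, the largest power of 2 not exceeding n divides exactly one
# denominator, so H_n in lowest terms has an even denominator and
# therefore cannot be an integer.
h = Fraction(0)
for n in range(1, 201):
    h += Fraction(1, n)
    if n >= 2:
        assert h.denominator > 1          # H_n is not an integer
        assert h.denominator % 2 == 0     # the reduced denominator is even
```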
The alternating harmonic series

1 - 1/2 + 1/3 - 1/4 + 1/5 - ... = ln 2

is conditionally convergent but not absolutely convergent: if the terms in the series are systematically rearranged, in general the sum changes and, depending on the rearrangement, may even become infinite.
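The rearrangement phenomenon is easy to see numerically: taking one positive term followed by two negative terms drives the sum to half of ln 2. A sketch (the partial-sum helpers are our own):

```python
import math

def alternating(n_terms):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ... in the usual order."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    """Same terms rearranged as (1 - 1/2 - 1/4) + (1/3 - 1/6 - 1/8) + ..."""
    s = 0.0
    for j in range(1, n_blocks + 1):
        s += 1 / (2 * j - 1) - 1 / (4 * j - 2) - 1 / (4 * j)
    return s

# The usual order converges to ln 2; the rearrangement converges to (ln 2)/2.
assert abs(alternating(10**6) - math.log(2)) < 1e-5
assert abs(rearranged(10**5) - math.log(2) / 2) < 1e-5
```

Each rearranged block collapses to 1/(2(2j - 1)) - 1/(4j), which is half of the j-th pair of the original series, so the rearranged sum is exactly half the original.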
A related series can be derived from the Taylor series for the arctangent:

arctan x = x - x^3/3 + x^5/5 - x^7/7 + ...

Evaluating at x = 1 gives

1 - 1/3 + 1/5 - 1/7 + ... = π/4,

which is known as the Leibniz series.
The general harmonic series is of the form

∑_(n=0)^∞ 1/(an + b),

where a ≠ 0 and b are real numbers, and b/a is not zero or a negative integer (so that no term has a zero denominator).
By the limit comparison test with the harmonic series, all general harmonic series also diverge.
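The limit comparison can be illustrated for one concrete general harmonic series, say a = 2, b = 3 (values chosen here purely for illustration): the ratio of its terms to the harmonic terms tends to the nonzero constant 1/a, and its partial sums grow without bound, roughly like (1/a) ln n.

```python
# Limit comparison with the harmonic series:
# (1/(a n + b)) / (1/n) = n/(a n + b) -> 1/a != 0,
# so sum 1/(a n + b) diverges whenever sum 1/n does.
a, b = 2.0, 3.0

ratio = (1 / (a * 10**9 + b)) / (1 / 10**9)
assert abs(ratio - 1 / a) < 1e-8

# Partial sums keep growing without bound (about (1/a) * ln n).
partial = sum(1 / (a * n + b) for n in range(1, 10**6 + 1))
assert partial > 5
```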
A generalization of the harmonic series is the p-series (or hyperharmonic series), defined as

∑_(n=1)^∞ 1/n^p

for any real number p. When p = 1, the p-series is the harmonic series, which diverges. Either the integral test or the Cauchy condensation test shows that the p-series converges for all p > 1 (in which case it is called the over-harmonic series) and diverges for all p ≤ 1. If p > 1 then the sum of the p-series is ζ(p), i.e., the Riemann zeta function evaluated at p.
The problem of finding the sum for p = 2 is called the Basel problem; Leonhard Euler showed it is π^2/6. The value of the sum for p = 3 is called Apéry's constant, since Roger Apéry proved that it is an irrational number.
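Euler's value π^2/6 for p = 2 can be checked numerically; the partial sum over the first million terms undershoots the limit by roughly the tail, which lies between 1/(n + 1) and 1/n:

```python
import math

n = 10**6
s = sum(1.0 / k**2 for k in range(n, 0, -1))  # smallest terms first
# The tail sum_(k>n) 1/k^2 lies between 1/(n+1) and 1/n,
# so the partial sum sits just below pi^2/6.
assert 0 < math.pi**2 / 6 - s < 2 / n
```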
Related to the p-series is the ln-series, defined as

∑_(n=2)^∞ 1/(n (ln n)^p)

for any positive real number p. This can be shown by the integral test to diverge for p ≤ 1 but converge for all p > 1.
For any convex, real-valued function φ such that

lim sup_(u→0+) φ(u/2)/φ(u) < 1/2,

the series ∑_(n≥1) φ(1/n) is convergent.
The random harmonic series

∑_(n=1)^∞ s_n/n,

where the s_n are independent, identically distributed random variables taking the values +1 and -1 with equal probability 1/2, is a well-known example in probability theory of a series of random variables that converges with probability 1. This convergence is an easy consequence of either the Kolmogorov three-series theorem or of the closely related Kolmogorov maximal inequality. Byron Schmuland of the University of Alberta further examined the properties of the random harmonic series, and showed that the convergent series is a random variable with some interesting properties. In particular, the probability density function of this random variable evaluated at +2 or at -2 takes on a value differing from 1/8 by less than 10^-42. Schmuland's paper explains why this probability is so close to, but not exactly, 1/8. The exact value of this probability is given by the infinite cosine product integral C_2 divided by π.
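The almost-sure convergence shows up clearly in simulation: within one realization, late partial sums barely move, because the variance of the tail beyond N is ∑_(n>N) 1/n^2 ≈ 1/N. A sketch with a fixed seed:

```python
import random

rng = random.Random(2024)

# One realization of the random harmonic series sum s_n / n with s_n = ±1.
s, checkpoints = 0.0, {}
for n in range(1, 100001):
    s += rng.choice((-1.0, 1.0)) / n
    if n in (10**4, 10**5):
        checkpoints[n] = s

# The tail standard deviation between N = 10^4 and N = 10^5 is about 0.01,
# so the two checkpoints agree closely (0.2 is a very generous bound).
assert abs(checkpoints[10**5] - checkpoints[10**4]) < 0.2
```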
The depleted harmonic series, where all of the terms in which the digit 9 appears anywhere in the denominator are removed, can be shown to converge to the value 22.92067661926415034816.... In fact, when all of the terms containing any particular string of digits (in any base) are removed, the series converges.
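The depleted series can be summed directly by filtering out denominators containing a 9; note that convergence toward 22.9206... is extremely slow, so the partial sums below remain far from the limit (the helper name `kempner_partial` is our own):

```python
def kempner_partial(limit, digit="9"):
    """Partial sum of 1/n over n <= limit whose decimal digits avoid `digit`."""
    return sum(1.0 / n for n in range(1, limit + 1) if digit not in str(n))

s4 = kempner_partial(10**4)
s5 = kempner_partial(10**5)
# Partial sums increase but stay well below the limit 22.92067661926...
assert s4 < s5 < 22.9206767
```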
The harmonic series can be counterintuitive to students first encountering it, because it is a divergent series even though the limit of the nth term as n goes to infinity is zero. The divergence of the harmonic series is also the source of some apparent paradoxes. One example of these is the "worm on the rubber band". Suppose that a worm crawls along an infinitely-elastic one-meter rubber band at the same time as the rubber band is uniformly stretched. If the worm travels 1 centimeter per minute and the band stretches 1 meter per minute, will the worm ever reach the end of the rubber band? The answer, counterintuitively, is "yes", for after n minutes, the ratio of the distance travelled by the worm to the total length of the rubber band is

(1/100) (1 + 1/2 + 1/3 + ... + 1/n) = H_n/100.
(In fact the actual ratio is a little less than this sum as the band expands continuously.)
Because the series gets arbitrarily large as n becomes larger, eventually this ratio must exceed 1, which implies that the worm reaches the end of the rubber band. However, the value of n at which this occurs must be extremely large: approximately e^100, a number exceeding 10^43 minutes (more than 10^37 years). Although the harmonic series does diverge, it does so very slowly.
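Iterating e^100 steps is out of the question, but the finishing time can be estimated from the logarithmic growth of the partial sums: the worm finishes when H_n ≥ 100, i.e. when n ≈ e^(100 - γ). A sketch with γ hard-coded:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

# Progress after n minutes is H_n / 100, so the worm finishes when H_n >= 100.
# Using H_n ≈ ln n + γ, that happens around n = e^(100 - γ) minutes.
n_finish = math.exp(100 - GAMMA)
years = n_finish / (60 * 24 * 365.25)

assert n_finish > 1e43   # more than 10^43 minutes
assert years > 1e37      # more than 10^37 years
```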
Another problem involving the harmonic series is the Jeep problem, which (in one form) asks how much total fuel is required for a jeep with a limited fuel-carrying capacity to cross a desert, possibly leaving fuel drops along the route. The distance that can be traversed with a given amount of fuel is related to the partial sums of the harmonic series, which grow logarithmically, and so the fuel required increases exponentially with the desired distance.
Another example is the block-stacking problem: given a collection of identical dominoes, it is clearly possible to stack them at the edge of a table so that they hang over the edge of the table without falling. The counterintuitive result is that one can stack them in such a way as to make the overhang arbitrarily large, provided there are enough dominoes.
A simpler example, on the other hand, is the swimmer who keeps adding speed when touching the walls of the pool. The swimmer starts crossing a 10-meter pool at a speed of 2 m/s, and with every crossing, another 2 m/s is added to the speed. In theory, the swimmer's speed is unlimited, but the number of pool crossings needed to reach that speed becomes very large; for instance, to reach the speed of light (ignoring special relativity), the swimmer needs to cross the pool 150 million times. In contrast to this large number, the time required to reach a given speed depends on the sum of the series at any given number of pool crossings (iterations):

time for n crossings = 10/2 + 10/4 + 10/6 + ... + 10/(2n) = 5 (1 + 1/2 + 1/3 + ... + 1/n) seconds
Calculating the sum iteratively shows that the time required to reach the speed of light is only 97 seconds. Continuing beyond this point (exceeding the speed of light, again ignoring special relativity), the time taken for each individual crossing approaches zero, yet the cumulative time over all crossings, being a multiple of the harmonic series, still diverges, albeit extremely slowly.
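The 97-second figure follows from the logarithmic growth of the partial sums: crossing k covers 10 m at 2k m/s and so takes 10/(2k) = 5/k seconds, giving a total of 5·H_n seconds after n crossings. A sketch with γ hard-coded:

```python
import math

POOL = 10.0                 # pool length, meters
STEP = 2.0                  # speed added per crossing, m/s
C = 299_792_458.0           # speed of light, m/s
GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

crossings = math.ceil(C / STEP)   # crossings needed to reach light speed
# Crossing k takes POOL / (STEP * k) = 5/k seconds, so the total time is
# 5 * H_n, approximated here via H_n ≈ ln n + γ.
total_seconds = (POOL / STEP) * (math.log(crossings) + GAMMA)

assert 1.49e8 < crossings < 1.50e8   # about 150 million crossings
assert 96 < total_seconds < 98       # about 97 seconds
```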