# Equation Solving



This article uses material from the Wikipedia page available here. It is released under the Creative Commons Attribution-Share-Alike License 3.0.


In mathematics, to **solve an equation** is to find its **solutions**: the values (numbers, functions, sets, etc.) that fulfill the condition stated by the equation, which generally consists of two expressions related by an equality sign. When seeking a solution, one or more of the free variables are designated as **unknowns**. A solution is an assignment of expressions to the unknown variables that makes the equality in the equation true. In other words, a solution is an expression or a collection of expressions (one for each unknown) such that, when substituted for the unknowns, the equation becomes an identity.
A solution of an equation is often also called a **root** of the equation, particularly but not only for algebraic or numerical equations.

A problem of solving an equation may be numeric or symbolic. Solving an equation **numerically** means that only numbers represented explicitly as numerals (not as an expression involving variables) are admitted as solutions. Solving an equation **symbolically** means that expressions that may contain known variables, or possibly also variables not in the original equation, are admitted as solutions.

For example, the equation *x* + *y* = 2*x* - 1 is solved for the unknown *x* by the solution *x* = *y* + 1, because substituting *y* + 1 for *x* in the equation results in (*y* + 1) + *y* = 2(*y* + 1) - 1, a true statement. It is also possible to take the variable *y* to be the unknown, and then the equation is solved by *y* = *x* - 1. Or *x* and *y* can both be treated as unknowns, and then there are many solutions to the equation. (*x*, *y*) = (*a* + 1, *a*) is a symbolic solution. Instantiating a symbolic solution with specific numbers always gives a numerical solution; for example, *a* = 0 gives (*x*, *y*) = (1, 0) (that is, *x* = 1 and *y* = 0) and *a* = 1 gives (*x*, *y*) = (2, 1).

Note that the distinction between known variables and unknown variables is made in the statement of the problem, rather than the equation. However, in some areas of mathematics the convention is to reserve some variables as known and others as unknown. When writing polynomials, the coefficients are usually taken to be known and the indeterminates to be unknown, but depending on the problem, all variables may assume either role.
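As a quick sanity check, the symbolic solution above can be verified by substitution (a minimal Python sketch; the helper name `solves` is ours, not from the text):

```python
# Check the symbolic solution x = y + 1 of the equation x + y = 2x - 1
# by substituting sample numeric values for the parameter a.

def solves(x, y):
    """True when (x, y) satisfies x + y == 2*x - 1."""
    return x + y == 2 * x - 1

# Instantiate the symbolic solution (x, y) = (a + 1, a) at several values of a.
solutions = [(a + 1, a) for a in range(-3, 4)]
assert all(solves(x, y) for x, y in solutions)

# The specific numeric solutions mentioned in the text:
assert solves(1, 0)  # a = 0
assert solves(2, 1)  # a = 1
```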

Depending on the problem, the task may be to find any solution (finding a single solution is enough) or all solutions. The set of all solutions is called the solution set. In the example above, the solution (*x*, *y*) = (*a* + 1, *a*) is also a parametrization of the solution set with the parameter being *a*. It is also possible that the task is to find a solution, among possibly many, that is *best* in some respect; problems of that nature are called optimization problems; solving an optimization problem is generally not referred to as "equation solving".

A wording such as "an equation **in** *x* and *y*", or "solve **for** *x* and *y*", implies that the unknowns are as indicated: in these cases *x* and *y*.

## Overview

In one general case, we have a situation such as

*f*(*x*_{1}, ..., *x*_{n}) = *c*,

where *x*_{1},...,*x*_{n} are the unknowns, and *c* is a constant. Its solutions are the members of the inverse image

*f*^{−1}[*c*] = {(*a*_{1}, ..., *a*_{n}) ∈ *T*_{1}×···×*T*_{n} | *f*(*a*_{1}, ..., *a*_{n}) = *c*},

where *T*_{1}×···×*T*_{n} is the domain of the function *f*. Note that the set of solutions can be the empty set (there are no solutions), a singleton (there is exactly one solution), finite, or infinite (there are infinitely many solutions).

For example, an equation such as

3*x* + 2*y* = 21*z*

with unknowns *x*, *y* and *z*, can be solved by first modifying the equation in some way while keeping it equivalent, such as subtracting 21*z* from both sides of the equation to obtain

3*x* + 2*y* − 21*z* = 0

In this particular case there is not just *one* solution to this equation, but an infinite set of solutions, which can be written

{(*x*, *y*, *z*) | 3*x* + 2*y* − 21*z* = 0}.

One particular solution is *x* = 0, *y* = 0, *z* = 0. Two other solutions are *x* = 3, *y* = 6, *z* = 1, and *x* = 8, *y* = 9, *z* = 2. In fact, this particular set of solutions describes a *plane* in three-dimensional space, which passes through the three points with these coordinates.
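A short brute-force sketch (the helper name `on_plane` is ours) confirms the quoted solutions and illustrates that many more integer points lie on the plane:

```python
# The solution set of 3x + 2y - 21z = 0 is infinite; check the particular
# solutions from the text, then enumerate some nearby integer points.

def on_plane(x, y, z):
    """True when (x, y, z) satisfies 3x + 2y - 21z = 0."""
    return 3 * x + 2 * y - 21 * z == 0

# The particular solutions quoted in the text:
assert on_plane(0, 0, 0) and on_plane(3, 6, 1) and on_plane(8, 9, 2)

# A small exhaustive search turns up many more integer solutions.
solutions = [(x, y, z)
             for x in range(-5, 6)
             for y in range(-11, 12)
             for z in range(-2, 3)
             if on_plane(x, y, z)]
assert len(solutions) >= 5
```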

## Solution sets

The solution set of a given set of equations or inequalities is the set of all its solutions, a solution being a tuple of values, one for each unknown, that satisfies all the equations or inequalities.
If the solution set is empty, then there are no values *x*_{i} for which all the equations and inequalities become true simultaneously.

For example, let us examine a classic one-variable case. Using the squaring function on the integers, that is, the function *f* whose domain is the integers, defined by:

*f*(*x*) = *x*^{2},

consider the equation

*f*(*x*) = 2.

Its solution set is {}, the empty set, since 2 is not the square of an integer, so no integer solves this equation. However, if we modify the function's definition – more specifically, the function's *domain* – we can find solutions to this equation. If we instead define the domain of *f* to be the real numbers, the equation above has two solutions, and its solution set is

{√2, −√2}.
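The role of the domain can be checked directly (a minimal Python sketch):

```python
import math

# Over the integers, f(x) = x**2 = 2 has no solution: test a small range.
assert all(x * x != 2 for x in range(-10, 11))
# For |x| >= 2 we have x*x >= 4, so the finite check really covers all integers.

# Over the reals the same equation has two solutions, +sqrt(2) and -sqrt(2).
r = math.sqrt(2)
for x in (r, -r):
    assert math.isclose(x * x, 2.0)
```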

We have already seen that certain solution sets can describe surfaces. For example, in studying elementary mathematics, one knows that the solution set of an equation of the form *ax* + *by* = *c*, with *a*, *b*, and *c* real constants and *a* and *b* not both zero, forms a line in the vector space **R**^{2}. However, it may not always be easy to depict solution sets graphically – for example, the solution set of an equation of the form *ax* + *by* + *cz* + *dw* = *k* (with *a*, *b*, *c*, *d*, and *k* real constants) is a hyperplane.

## Methods of solution

The methods for solving equations generally depend on the type of equation, both the kind of expressions in the equation and the kind of values that may be assumed by the unknowns. The variety in types of equations is large, and so are the corresponding methods. Only a few specific types are mentioned below.

In general, given a class of equations, there may be no known systematic method (algorithm) that is guaranteed to work. This may be due to a lack of mathematical knowledge; some problems were only solved after centuries of effort. But this also reflects that, in general, no such method can exist: some problems are known to be unsolvable by an algorithm, such as Hilbert's tenth problem, which was proved unsolvable in 1970.

For several classes of equations, algorithms have been found for solving them, some of which have been implemented and incorporated in computer algebra systems, but often require no more sophisticated technology than pencil and paper. In some other cases, heuristic methods are known that are often successful but that are not guaranteed to lead to success.

### Brute force, trial and error, inspired guess

If the solution set of an equation is restricted to a finite set (as is the case for equations in modular arithmetic, for example), or can be limited to a finite number of possibilities (as is the case with some Diophantine equations), the solution set can be found by brute force, that is, by testing each of the possible values (candidate solutions). It may be the case, though, that the number of possibilities to be considered, although finite, is so huge that an exhaustive search is not practically feasible; this is, in fact, a requirement for strong encryption methods.
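A minimal sketch of brute-force solving over a finite set, using the illustrative congruence x² ≡ 2 (mod 7) as the search problem:

```python
# Brute force over a finite solution set: in modular arithmetic there are
# only finitely many residues, so we can simply test them all.
# Example: solve x**2 ≡ 2 (mod 7).
solutions = [x for x in range(7) if (x * x) % 7 == 2]
assert solutions == [3, 4]  # 3*3 = 9 ≡ 2 and 4*4 = 16 ≡ 2 (mod 7)
```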

As with all kinds of problem solving, trial and error may sometimes yield a solution, in particular where the form of the equation, or its similarity to another equation with a known solution, may lead to an "inspired guess" at the solution. If a guess, when tested, fails to be a solution, consideration of the way in which it fails may lead to a modified guess.

### Elementary algebra

Equations involving linear or simple rational functions of a single real-valued unknown, say *x*, such as

2*x* + 5 = 11

can be solved using the methods of elementary algebra.

### Systems of linear equations

Smaller systems of linear equations can be solved likewise by methods of elementary algebra. For solving larger systems, algorithms are used that are based on linear algebra.
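One such linear-algebra algorithm is Gaussian elimination; here is a compact, illustrative (not production-grade) Python sketch:

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    A is a list of n rows of n floats, b a list of n floats.
    Assumes A is nonsingular; a sketch, not production code.
    """
    n = len(A)
    # Build the augmented matrix so row operations carry b along.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column up.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the column below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Example system: 2x + y = 5, x - y = 1, with solution x = 2, y = 1.
x, y = solve_linear([[2.0, 1.0], [1.0, -1.0]], [5.0, 1.0])
```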

### Polynomial equations

Polynomial equations of degree up to four can be solved exactly using algebraic methods, of which the quadratic formula is the simplest example. Polynomial equations with a degree of five or higher require in general numerical methods (see below) or special functions such as Bring radicals, although some specific cases may be solvable algebraically, for example

4*x*^{5} − *x*^{3} − 3 = 0

(by using the rational root theorem), and

*x*^{6} − 5*x*^{3} + 6 = 0,

(by using the substitution *x* = *z*^{1/3}, which simplifies this to a quadratic equation in *z*).
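Both examples can be checked numerically (assumptions: real roots only, floating-point tolerance):

```python
import math

# The sextic x**6 - 5*x**3 + 6 = 0 becomes a quadratic in z = x**3:
# z**2 - 5*z + 6 = 0.
def p(x):
    return x ** 6 - 5 * x ** 3 + 6

# Quadratic formula on z**2 - 5*z + 6 = 0.
disc = 5 ** 2 - 4 * 6                                   # discriminant = 1
z_roots = [(5 + math.sqrt(disc)) / 2, (5 - math.sqrt(disc)) / 2]  # 3.0, 2.0
# Undo the substitution x = z**(1/3) (real cube roots of positive z).
x_roots = [z ** (1.0 / 3.0) for z in z_roots]
for x in x_roots:
    assert abs(p(x)) < 1e-9

# Rational root theorem example: x = 1 is a root of 4*x**5 - x**3 - 3.
assert 4 * 1 ** 5 - 1 ** 3 - 3 == 0
```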

### Diophantine equations

In Diophantine equations the solutions are required to be integers. In some cases a brute force approach can be used, as mentioned above. In some other cases, in particular if the equation is in one unknown, it is possible to solve the equation for rational-valued unknowns (see Rational root theorem), and then find solutions to the Diophantine equation by restricting the solution set to integer-valued solutions. For example, the polynomial equation

2*x*^{2} − 5*x* − 3 = 0

has the rational solutions *x* = −1/2 and *x* = 3, and so, viewed as a Diophantine equation, it has the unique solution *x* = 3.
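The rational-root procedure can be sketched as follows; the quadratic 2*x*² − 5*x* − 3 = 0 used here is reconstructed from the stated roots −1/2 and 3:

```python
from fractions import Fraction

# Rational root theorem search for 2*x**2 - 5*x - 3 = 0:
# any rational root p/q (in lowest terms) has p dividing 3 and q dividing 2.
def poly(x):
    return 2 * x * x - 5 * x - 3

candidates = {Fraction(p, q)
              for p in (1, -1, 3, -3)
              for q in (1, 2)}
rational_roots = sorted(r for r in candidates if poly(r) == 0)
assert rational_roots == [Fraction(-1, 2), Fraction(3)]

# Viewed as a Diophantine equation, only the integer root survives.
integer_roots = [r for r in rational_roots if r.denominator == 1]
assert integer_roots == [3]
```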

In general, however, Diophantine equations are among the most difficult equations to solve.

### Inverse functions

In the simple case of a function of one variable, say, *h*(*x*), we can solve an equation of the form

*h*(*x*) = *c*, *c* constant

by considering what is known as the *inverse function* of *h*.

Given a function *h* : *A* → *B*, the inverse function, denoted *h*^{−1} : *B* → *A*, is the function (when it exists) such that

*h*^{−1}(*h*(*x*)) = *h*(*h*^{−1}(*x*)) = *x*.

Now, if we apply the inverse function to both sides of

*h*(*x*) = *c*, where *c* is a constant value in *B*,

we obtain

*h*^{−1}(*h*(*x*)) = *h*^{−1}(*c*)
*x* = *h*^{−1}(*c*)

and we have found the solution to the equation. However, depending on the function, the inverse may be difficult to define, or may not be a function on all of the set *B* (only on some subset), or may take several values at some points.
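A minimal sketch of solving *h*(*x*) = *c* through an inverse, using h(x) = x³ (a bijection on the reals) as an illustrative choice:

```python
# Solve h(x) = c by applying the inverse function h**-1 to both sides.
# Here h(x) = x**3, whose inverse on the reals is the cube root.
def h(x):
    return x ** 3

def h_inv(y):
    # Real cube root; handle negative y explicitly, since y ** (1/3)
    # in Python does not return a real number for negative y.
    return abs(y) ** (1.0 / 3.0) * (1 if y >= 0 else -1)

c = 27.0
x = h_inv(c)                     # x = h**-1(c), approximately 3.0
assert abs(h(x) - c) < 1e-9      # substituting back recovers c
```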

If just one solution will do, instead of the full solution set, it is actually sufficient if only the functional identity

*h*(*h*^{−1}(*x*)) = *x*

holds. For example, the projection π_{1} : **R**^{2} → **R** defined by π_{1}(*x*, *y*) = *x* has no post-inverse, but it has a pre-inverse π_{1}^{−1} defined by π_{1}^{−1}(*x*) = (*x*, 0). Indeed, the equation

π_{1}(*x*, *y*) = *c*

is solved by

(*x*, *y*) = π_{1}^{−1}(*c*) = (*c*, 0).

Examples of inverse functions include the *n*th root (inverse of *x*^{n}); the logarithm (inverse of *a*^{x}); the inverse trigonometric functions; and Lambert's W function (inverse of *x*e^{x}).

### Factorization

If the left-hand side expression of an equation *P* = 0 can be factorized as *P* = *QR*, the solution set of the original equation consists of the union of the solution sets of the two equations *Q* = 0 and *R* = 0.
For example, the equation

tan *x* + cot *x* = 2

can be rewritten, using the identity tan *x* cot *x* = 1, as

(tan^{2} *x* − 2 tan *x* + 1) / tan *x* = 0,

which can be factorized into

(tan *x* − 1)^{2} / tan *x* = 0.

The solutions are thus the solutions of the equation tan *x* = 1, and form the set

{π/4 + *k*π | *k* ∈ **Z**}.
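Assuming the classic worked example tan *x* + cot *x* = 2 (reconstructed here; it matches the identity and the final equation tan *x* = 1 in the text), the solution set *x* = π/4 + *k*π can be checked numerically:

```python
import math

# Every x = pi/4 + k*pi satisfies tan x = 1, and therefore also the
# original (reconstructed) equation tan x + cot x = 2.
for k in range(-3, 4):
    x = math.pi / 4 + k * math.pi
    t = math.tan(x)
    assert abs(t - 1.0) < 1e-9            # solution of the factor tan x = 1
    assert abs(t + 1.0 / t - 2.0) < 1e-9  # solution of tan x + cot x = 2
```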

### Numerical methods

With more complicated equations in real or complex numbers, simple methods to solve equations can fail. Often, root-finding algorithms such as the Newton-Raphson method can be used to find a numerical solution to an equation, which, for some applications, can be entirely sufficient to solve the problem at hand.
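A minimal sketch of the Newton-Raphson iteration, applied to x² − 2 = 0 (the function and starting point are illustrative choices):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: repeatedly follow the tangent line toward a root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Solve x**2 - 2 = 0 numerically, starting from x0 = 1; the root is sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```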

### Matrix equations

Equations involving matrices and vectors of real numbers can often be solved by using methods from linear algebra.
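For the 2×2 case, the matrix equation *A***x** = **b** can be solved with the explicit inverse of *A*; a small illustrative sketch:

```python
# Solve the matrix equation A x = b for a 2x2 system using the explicit
# inverse: for A = [[a, b], [c, d]], A**-1 = (1/det) * [[d, -b], [-c, a]].
def solve_2x2(A, b):
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("singular matrix: no unique solution")
    x1 = (a22 * b[0] - a12 * b[1]) / det
    x2 = (a11 * b[1] - a21 * b[0]) / det
    return x1, x2

# Example: x + 2y = 4, 3x - y = 5, with solution x = 2, y = 1.
x, y = solve_2x2([[1.0, 2.0], [3.0, -1.0]], [4.0, 5.0])
```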

### Differential equations

There is a vast body of methods for solving various kinds of differential equations, both numerically and analytically. A particular class of problem that can be considered to belong here is integration, and the analytic methods for solving this kind of problem are now called symbolic integration. Solutions of differential equations can be *implicit* or *explicit*.^{[1]}
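As a minimal numerical sketch, the forward Euler method (one of the simplest such methods) applied to the illustrative initial value problem dy/dt = y, y(0) = 1, whose exact solution is y = e^t:

```python
import math

def euler(f, y0, t0, t1, steps):
    """Forward Euler: step the ODE y' = f(t, y) from t0 to t1."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)  # follow the slope field for one small step
        t += h
    return y

# dy/dt = y with y(0) = 1; the exact value at t = 1 is e = 2.71828...
approx = euler(lambda t, y: y, 1.0, 0.0, 1.0, 10000)
assert abs(approx - math.e) < 1e-3  # error shrinks as steps grows
```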

## See also

- Extraneous and missing solutions
- Simultaneous equations
- Equating coefficients
- Solving the geodesic equations
- Unification (computer science) — solving equations involving symbolic expressions

## References

1. Dennis G. Zill (15 March 2012). *A First Course in Differential Equations with Modeling Applications*. Cengage Learning. ISBN 1-285-40110-7.

