In mathematics, the determinant is a scalar value that is a function of the entries of a square matrix. It allows characterizing some properties of the matrix and the linear map represented by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the linear map represented by the matrix is an isomorphism. The determinant of a product of matrices is the product of their determinants (the preceding property is a corollary of this one). The determinant of a matrix A is denoted det(A), det A, or |A|.
In the case of a 2 × 2 matrix the determinant can be defined as
\[ \det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc. \]
Similarly, for a 3 × 3 matrix A, its determinant is
\[ \det\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = a\begin{vmatrix} e & f \\ h & i \end{vmatrix} - b\begin{vmatrix} d & f \\ g & i \end{vmatrix} + c\begin{vmatrix} d & e \\ g & h \end{vmatrix} = aei + bfg + cdh - ceg - bdi - afh. \]
Each determinant of a 2 × 2 matrix in this equation is called a minor of the matrix A. This procedure can be extended to give a recursive definition for the determinant of an n × n matrix, known as Laplace expansion.
Determinants occur throughout mathematics. For example, a matrix is often used to represent the coefficients in a system of linear equations, and determinants can be used to solve these equations (Cramer's rule), although other methods of solution are computationally much more efficient. Determinants are used for defining the characteristic polynomial of a matrix, whose roots are the eigenvalues. In geometry, the signed n-dimensional volume of an n-dimensional parallelepiped is expressed by a determinant. This is used in calculus with exterior differential forms and the Jacobian determinant, in particular for changes of variables in multiple integrals.
The determinant of a 2 × 2 matrix is denoted either by "det" or by vertical bars around the matrix, and is defined as
\[ \det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc. \]
For example,
\[ \begin{vmatrix} 3 & 7 \\ 1 & -4 \end{vmatrix} = 3 \cdot (-4) - 7 \cdot 1 = -19. \]
The determinant has several key properties that can be proved by direct evaluation of the definition for 2 × 2 matrices, and that continue to hold for determinants of larger matrices. They are as follows:^{[1]} first, the determinant of the identity matrix is 1. Second, the determinant is zero if two rows are the same:
\[ \begin{vmatrix} a & b \\ a & b \end{vmatrix} = ab - ba = 0. \]
This holds similarly if the two columns are the same. Moreover, the determinant is additive in each column separately:
\[ \begin{vmatrix} a & b + b' \\ c & d + d' \end{vmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} + \begin{vmatrix} a & b' \\ c & d' \end{vmatrix}. \]
Finally, if any column is multiplied by some number r (i.e., all entries in that column are multiplied by r), the determinant is also multiplied by that number:
\[ \begin{vmatrix} r \cdot a & b \\ r \cdot c & d \end{vmatrix} = r \cdot \begin{vmatrix} a & b \\ c & d \end{vmatrix}. \]
If the matrix entries are real numbers, the matrix A can be used to represent two linear maps: one that maps the standard basis vectors to the rows of A, and one that maps them to the columns of A. In either case, the images of the basis vectors form a parallelogram that represents the image of the unit square under the mapping. The parallelogram defined by the rows of the above matrix is the one with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d), as shown in the accompanying diagram.
The absolute value of ad − bc is the area of the parallelogram, and thus represents the scale factor by which areas are transformed by A. (The parallelogram formed by the columns of A is in general a different parallelogram, but since the determinant is symmetric with respect to rows and columns, the area will be the same.)
The absolute value of the determinant together with the sign becomes the oriented area of the parallelogram. The oriented area is the same as the usual area, except that it is negative when the angle from the first to the second vector defining the parallelogram turns in a clockwise direction (which is opposite to the direction one would get for the identity matrix).
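This area interpretation can be checked numerically. The following Python sketch (the function name is ours, chosen for illustration) computes the area of the parallelogram with vertices (0, 0), (a, b), (a + c, b + d), (c, d) by the shoelace formula and compares it with |ad − bc|:

```python
def shoelace_area(vertices):
    """Area of a simple polygon given its vertices in order (shoelace formula)."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

a, b, c, d = 2.0, 1.0, 0.5, 3.0          # rows of the matrix [[a, b], [c, d]]
parallelogram = [(0, 0), (a, b), (a + c, b + d), (c, d)]

print(shoelace_area(parallelogram))      # 5.5
print(abs(a * d - b * c))                # 5.5, the absolute value of the determinant
```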
To show that ad − bc is the signed area, one may consider a matrix containing two vectors u ≡ (a, b) and v ≡ (c, d) representing the parallelogram's sides. The signed area can be expressed as |u| |v| sin θ for the angle θ between the vectors, which is simply base times height, the length of one vector times the perpendicular component of the other. Due to the sine this already is the signed area, yet it may be expressed more conveniently using the cosine of the complementary angle to a perpendicular vector, e.g. u⊥ = (−b, a), so that |u⊥| |v| cos θ′, which can be determined by the pattern of the scalar product to be equal to ad − bc:
\[ \text{Signed area} = |u|\,|v|\sin\theta = |u^{\perp}|\,|v|\cos\theta' = \begin{pmatrix} -b \\ a \end{pmatrix} \cdot \begin{pmatrix} c \\ d \end{pmatrix} = ad - bc. \]
Thus the determinant gives the scaling factor and the orientation induced by the mapping represented by A. When the determinant is equal to one, the linear mapping defined by the matrix is equiareal and orientation-preserving.
The object known as the bivector is related to these ideas. In 2D, it can be interpreted as an oriented plane segment formed by imagining two vectors each with origin (0, 0), and coordinates (a, b) and (c, d). The bivector magnitude (denoted by (a, b) ∧ (c, d)) is the signed area, which is also the determinant ad − bc.^{[2]}
If an n × n real matrix A is written in terms of its column vectors A = [a₁ a₂ ⋯ aₙ], then
\[ A\begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} = a_1, \quad A\begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix} = a_2, \quad \ldots, \quad A\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix} = a_n. \]
This means that A maps the unit n-cube to the n-dimensional parallelotope defined by the vectors a₁, a₂, …, aₙ, the region
\[ P = \{\, c_1 a_1 + c_2 a_2 + \cdots + c_n a_n \mid 0 \le c_i \le 1 \,\}. \]
The determinant gives the signed n-dimensional volume of this parallelotope, and hence describes more generally the n-dimensional volume scaling factor of the linear transformation produced by A.^{[3]} (The sign shows whether the transformation preserves or reverses orientation.) In particular, if the determinant is zero, then this parallelotope has volume zero and is not fully n-dimensional, which indicates that the dimension of the image of A is less than n. This means that A produces a linear transformation which is neither onto nor one-to-one, and so is not invertible.
In the sequel, A is a square matrix with n rows and n columns, so that it can be written as
\[ A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{pmatrix}. \]
The entries a_{1,1}, a_{1,2}, etc. are, for many purposes, real or complex numbers. As discussed below, the determinant is also defined for matrices whose entries are elements in more abstract algebraic structures known as commutative rings.
The determinant of A is denoted by det(A), or it can be denoted directly in terms of the matrix entries by writing enclosing bars instead of brackets:
\[ \det(A) = \begin{vmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{vmatrix}. \]
There are various equivalent ways to define the determinant of a square matrix A, i.e. one with the same number of rows and columns: the determinant can be defined via the Leibniz formula, an explicit formula involving sums of products of certain entries of the matrix. The determinant can also be characterized as the unique function depending on the entries of the matrix satisfying certain properties. This approach can also be used to compute determinants by simplifying the matrices in question.
The Leibniz formula for the determinant of a 3 × 3 matrix is the following:
\[ \det\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = aei + bfg + cdh - ceg - bdi - afh. \]
The rule of Sarrus is a mnemonic for this formula: when copies of the first two columns of the matrix are written beside it, the determinant is the sum of the products of the three diagonal northwest-to-southeast lines of matrix elements, minus the sum of the products of the three diagonal southwest-to-northeast lines of elements.
This scheme for calculating the determinant of a 3 × 3 matrix does not carry over into higher dimensions.
The Leibniz formula for the determinant of an n × n matrix is a more involved, but related expression. It is an expression involving the notion of permutations and their signature. A permutation of the set {1, 2, …, n} is a function σ that reorders this set of integers. The value in the i-th position after the reordering is denoted by σ(i). The set of all such permutations, the so-called symmetric group, is denoted S_n. The signature sgn(σ) of σ is defined to be +1 whenever the reordering given by σ can be achieved by successively interchanging two entries an even number of times, and −1 whenever it can be achieved by an odd number of such interchanges. Given the matrix A and a permutation σ, the product
\[ a_{1,\sigma(1)} \cdot a_{2,\sigma(2)} \cdots a_{n,\sigma(n)} \]
is also written more briefly using Pi notation as
\[ \prod_{i=1}^{n} a_{i,\sigma(i)}. \]
Using these notions, the definition of the determinant using the Leibniz formula is then
\[ \det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}, \]
a sum involving all permutations, where each summand is a product of entries of the matrix, multiplied with a sign depending on the permutation.
The following table unwinds these terms in the case n = 3. In the first column, a permutation is listed according to its values. For example, in the second row, the permutation σ satisfies σ(1) = 1, σ(2) = 3, σ(3) = 2. It can be obtained from the standard order (1, 2, 3) by a single exchange (exchanging the second and third entry), so that its signature is −1.
Permutation σ | sgn(σ) | sgn(σ) · a_{1,σ(1)} a_{2,σ(2)} a_{3,σ(3)}
1, 2, 3 | +1 | +a_{1,1} a_{2,2} a_{3,3}
1, 3, 2 | −1 | −a_{1,1} a_{2,3} a_{3,2}
3, 1, 2 | +1 | +a_{1,3} a_{2,1} a_{3,2}
3, 2, 1 | −1 | −a_{1,3} a_{2,2} a_{3,1}
2, 3, 1 | +1 | +a_{1,2} a_{2,3} a_{3,1}
2, 1, 3 | −1 | −a_{1,2} a_{2,1} a_{3,3}
The sum of the six terms in the third column then reads
\[ \det(A) = a_{1,1} a_{2,2} a_{3,3} - a_{1,1} a_{2,3} a_{3,2} + a_{1,3} a_{2,1} a_{3,2} - a_{1,3} a_{2,2} a_{3,1} + a_{1,2} a_{2,3} a_{3,1} - a_{1,2} a_{2,1} a_{3,3}. \]
This gives back the formula for 3 × 3 matrices above. For a general n × n matrix, the Leibniz formula involves n! (n factorial) summands, each of which is a product of n entries of the matrix.
The Leibniz formula can also be expressed using a summation in which not only permutations, but all sequences of n indices in the range 1, …, n occur. To do this, one uses the Levi-Civita symbol instead of the sign of a permutation:
\[ \det(A) = \sum_{i_1, i_2, \ldots, i_n = 1}^{n} \varepsilon_{i_1 \cdots i_n}\, a_{1,i_1} \cdots a_{n,i_n}. \]
This gives back the formula above since the Levi-Civita symbol is zero if the indices do not form a permutation.^{[4]}^{[5]}
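Because the Leibniz formula is a finite sum over permutations, it translates directly into code. The following Python sketch (a naive illustration, not an efficient method, with function names of our choosing) computes the determinant of a small matrix by summing over all permutations with their signs:

```python
import math
from itertools import permutations

def sign(perm):
    """Signature of a permutation (tuple of 0-based indices), via inversion count."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(a):
    """Determinant by the Leibniz formula: sum of sgn(sigma) * prod_i a[i][sigma(i)]."""
    n = len(a)
    return sum(sign(p) * math.prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det_leibniz([[3, 7], [1, -4]]))                    # -19
print(det_leibniz([[2, 0, 1], [1, 3, 2], [0, 1, 1]]))    # 3
```

The n! terms make this approach unusable beyond very small matrices, which is exactly the inefficiency discussed in the computation section below.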
The determinant can be characterized by the following three key properties. To state these, it is convenient to regard an n × n matrix A as being composed of its n columns, so denoted as
\[ A = \begin{pmatrix} a_1 & a_2 & \cdots & a_n \end{pmatrix}, \]
where the column vector a_i (for each i) is composed of the entries of the matrix in the i-th column. The three properties are: first, det(I) = 1, where I is the identity matrix; second, the determinant is alternating, i.e., it is zero whenever two columns of the matrix are identical; third, the determinant is multilinear, i.e., it is a linear function of each column separately, the other columns being held fixed.
If the determinant is defined using the Leibniz formula as above, these three properties can be proved by direct inspection of that formula. Some authors also approach the determinant directly using these three properties: it can be shown that there is exactly one function that assigns to any matrix A a number that satisfies these three properties.^{[6]} This also shows that this more abstract approach to the determinant yields the same definition as the one using the Leibniz formula.
To see this it suffices to expand the determinant by multilinearity in the columns into a (huge) linear combination of determinants of matrices in which each column is a standard basis vector. These determinants are either 0 (by the alternating property, when two columns coincide) or else ±1 (by multilinearity and the normalization det(I) = 1), so the linear combination gives the expression above in terms of the Levi-Civita symbol. While less technical in appearance, this characterization cannot entirely replace the Leibniz formula in defining the determinant, since without it the existence of an appropriate function is not clear.
These rules have several further consequences: for example, the determinant of a triangular matrix is the product of its diagonal entries, interchanging two columns changes the sign of the determinant, and adding a scalar multiple of one column to another column leaves the determinant unchanged.
These characterizing properties and their consequences listed above are theoretically significant, but can also be used to compute determinants for concrete matrices. In fact, Gaussian elimination can be applied to bring any matrix into upper triangular form, and the steps in this algorithm affect the determinant in a controlled way. The following sequence of steps illustrates the computation of a determinant using that method: the matrix is brought into triangular form by column operations, while keeping track of how each operation changes the determinant.
The reduction proceeds in four steps, each of which changes the determinant in a known way: first, the second column is added to the first (which leaves the determinant unchanged); next, 3 times the third column is added to the second (again leaving the determinant unchanged); then the first two columns are swapped (which reverses the sign of the determinant); and finally a multiple of the second column is added to the first (leaving the determinant unchanged), at which point the matrix is in triangular form, so its determinant is the product of its diagonal entries.
Combining these equalities gives the determinant of the original matrix.
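The same procedure is easy to mechanize. The following Python sketch (an illustrative implementation with names of our choosing, not a library routine) uses row operations, which affect the determinant in the same way as column operations: it reduces the matrix to triangular form with partial pivoting, tracks sign changes from swaps, and multiplies the diagonal entries.

```python
def det_by_elimination(a):
    """Determinant via Gaussian elimination with partial pivoting.

    Row swaps flip the sign; adding a multiple of one row to another
    leaves the determinant unchanged; the determinant of the resulting
    triangular matrix is the product of its diagonal entries.
    """
    m = [row[:] for row in a]          # work on a copy
    n = len(m)
    sign = 1.0
    for k in range(n):
        # choose the largest pivot in column k for numerical stability
        p = max(range(k, n), key=lambda i: abs(m[i][k]))
        if m[p][k] == 0.0:
            return 0.0                 # singular matrix
        if p != k:
            m[k], m[p] = m[p], m[k]
            sign = -sign               # a swap changes the sign
        for i in range(k + 1, n):
            factor = m[i][k] / m[k][k]
            for j in range(k, n):
                m[i][j] -= factor * m[k][j]
    result = sign
    for k in range(n):
        result *= m[k][k]
    return result

print(det_by_elimination([[2.0, 0.0, 1.0], [1.0, 3.0, 2.0], [0.0, 1.0, 1.0]]))  # 3.0
```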
The determinant of the transpose of A equals the determinant of A:
\[ \det\left(A^{\mathrm{T}}\right) = \det(A). \]
This can be proven by inspecting the Leibniz formula.^{[7]} This implies that in all the properties mentioned above, the word "column" can be replaced by "row" throughout. For example, viewing an n × n matrix as being composed of n rows, the determinant is an n-linear function.
Thus the determinant is a multiplicative map, i.e., for square matrices A and B of equal size, the determinant of a matrix product equals the product of their determinants:
\[ \det(AB) = \det(A)\det(B). \]
This key fact can be proven by observing that, for a fixed matrix A, both sides of the equation are alternating and multilinear as a function depending on the columns of B. Moreover, they both take the value det(A) when B is the identity matrix. The above-mentioned unique characterization of alternating multilinear maps therefore shows this claim.^{[8]}
A matrix A is invertible precisely if its determinant is nonzero. This follows from the multiplicativity of det and the formula for the inverse involving the adjugate matrix mentioned below. In this event, the determinant of the inverse matrix is given by
\[ \det\left(A^{-1}\right) = \frac{1}{\det(A)} = \left[\det(A)\right]^{-1}. \]
In particular, products and inverses of matrices with nonzero determinant (respectively, determinant one) still have this property. Thus, the set of such matrices (of fixed size n) forms a group known as the general linear group GL_n (respectively, a subgroup called the special linear group SL_n). More generally, the word "special" indicates the subgroup of another matrix group consisting of matrices of determinant one. Examples include the special orthogonal group (which if n is 2 or 3 consists of all rotation matrices), and the special unitary group.
The Cauchy–Binet formula is a generalization of that product formula for rectangular matrices. This formula can also be recast as a multiplicative formula for compound matrices whose entries are the determinants of all square submatrices of a given matrix.^{[9]}^{[10]}
Laplace expansion expresses the determinant of a matrix A in terms of determinants of smaller matrices, known as its minors. The minor M_{i,j} is defined to be the determinant of the (n − 1) × (n − 1) matrix that results from A by removing the i-th row and the j-th column. The expression (−1)^{i+j} M_{i,j} is known as a cofactor. For every i, one has the equality
\[ \det(A) = \sum_{j=1}^{n} (-1)^{i+j} a_{i,j} M_{i,j}, \]
which is called the Laplace expansion along the i-th row. For example, the Laplace expansion along the first row (i = 1) gives the following formula:
\[ \begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = a\begin{vmatrix} e & f \\ h & i \end{vmatrix} - b\begin{vmatrix} d & f \\ g & i \end{vmatrix} + c\begin{vmatrix} d & e \\ g & h \end{vmatrix}. \]
Unwinding the determinants of these 2 × 2 matrices gives back the Leibniz formula mentioned above. Similarly, the Laplace expansion along the j-th column is the equality
\[ \det(A) = \sum_{i=1}^{n} (-1)^{i+j} a_{i,j} M_{i,j}. \]
Laplace expansion can be used iteratively for computing determinants, but this approach is inefficient for large matrices. However, it is useful for computing the determinants of highly symmetric matrices such as the Vandermonde matrix
\[ \begin{vmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_n \\ x_1^2 & x_2^2 & \cdots & x_n^2 \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{n-1} & x_2^{n-1} & \cdots & x_n^{n-1} \end{vmatrix} = \prod_{1 \le i < j \le n} \left(x_j - x_i\right). \]
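For illustration, here is a minimal Python sketch of the recursive Laplace expansion along the first row (suitable only for small matrices, since its running time grows factorially with the size; the function name is ours):

```python
def det_laplace(a):
    """Determinant by Laplace expansion along the first row (recursive)."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # minor: remove row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det_laplace(minor)
    return total

print(det_laplace([[3, 7], [1, -4]]))                    # -19
print(det_laplace([[2, 0, 1], [1, 3, 2], [0, 1, 1]]))    # 3
```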
The adjugate matrix adj(A) is the transpose of the matrix of the cofactors, that is,
\[ \operatorname{adj}(A) = \left[ (-1)^{i+j} M_{j,i} \right]_{1 \le i, j \le n}. \]
For every matrix, one has^{[11]}
\[ (\det A)\, I = A \operatorname{adj}(A) = \operatorname{adj}(A)\, A. \]
Thus the adjugate matrix can be used for expressing the inverse of a nonsingular matrix:
\[ A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A). \]
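As a small numerical illustration (the helper below builds the adjugate entry by entry from cofactors; the function name and test matrix are our own choices), one can check the identity A · adj(A) = det(A) · I and recover the inverse:

```python
import numpy as np

def adjugate(a):
    """Adjugate: transpose of the cofactor matrix, adj(A)[i, j] = (-1)**(i+j) * M[j, i]."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    adj = np.empty_like(a)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(a, j, axis=0), i, axis=1)  # remove row j, column i
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

A = np.array([[2.0, 0.0, 1.0], [1.0, 3.0, 2.0], [0.0, 1.0, 1.0]])
print(A @ adjugate(A))                    # det(A) * identity = 3 * I
print(adjugate(A) / np.linalg.det(A))     # the inverse of A
```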
The formula for the determinant of a 2 × 2 matrix above continues to hold, under appropriate further assumptions, for a block matrix, i.e., a matrix composed of four submatrices A, B, C, D of dimension m × m, m × n, n × m and n × n, respectively. The easiest such formula, which can be proven using either the Leibniz formula or a factorization involving the Schur complement, is
\[ \det\begin{pmatrix} A & 0 \\ C & D \end{pmatrix} = \det(A)\det(D) = \det\begin{pmatrix} A & B \\ 0 & D \end{pmatrix}. \]
If A is invertible (and similarly if D is invertible^{[12]}), one has
\[ \det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(A)\det\!\left(D - C A^{-1} B\right). \]
If D is a 1 × 1 matrix, this simplifies to det(A)·(D − CA^{−1}B).
If the blocks are square matrices of the same size, further formulas hold. For example, if C and D commute (i.e., CD = DC), then^{[13]}
\[ \det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(AD - BC). \]
This formula has been generalized to matrices composed of more than 2 × 2 blocks, again under appropriate commutativity conditions among the individual blocks.^{[14]}
For A = D and B = C, the following formula holds (even if A and B do not commute):
\[ \det\begin{pmatrix} A & B \\ B & A \end{pmatrix} = \det(A - B)\det(A + B). \]
Sylvester's determinant theorem states that for A, an m × n matrix, and B, an n × m matrix (so that A and B have dimensions allowing them to be multiplied in either order forming a square matrix):
\[ \det\left(I_m + AB\right) = \det\left(I_n + BA\right), \]
where I_{m} and I_{n} are the m × m and n × n identity matrices, respectively.
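A quick numerical check of this identity with NumPy (random rectangular matrices of arbitrary illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

lhs = np.linalg.det(np.eye(m) + A @ B)   # determinant of an m x m matrix
rhs = np.linalg.det(np.eye(n) + B @ A)   # determinant of an n x n matrix
print(np.isclose(lhs, rhs))              # True
```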
From this general result several consequences follow.
The determinant of the sum A + B of two square matrices of the same size is not in general expressible in terms of the determinants of A and of B. However, for positive semidefinite matrices A, B and C of equal size,
\[ \det(A + B + C) + \det(C) \ge \det(A + C) + \det(B + C), \]
with the corollary
\[ \det(A + B) \ge \det(A) + \det(B).^{[16]}^{[17]} \]
The determinant is closely related to two other central concepts in linear algebra, the eigenvalues and the characteristic polynomial of a matrix. Let A be an n × n matrix with complex entries and eigenvalues λ_1, λ_2, …, λ_n. (Here it is understood that an eigenvalue with algebraic multiplicity μ occurs μ times in this list.) Then the determinant of A is the product of all eigenvalues,
\[ \det(A) = \prod_{i=1}^{n} \lambda_i = \lambda_1 \lambda_2 \cdots \lambda_n. \]
The product of all nonzero eigenvalues is referred to as the pseudo-determinant.
The characteristic polynomial is defined as^{[18]}
\[ \chi_A(t) = \det(t \cdot I - A). \]
Here, t is the indeterminate of the polynomial and I is the identity matrix of the same size as A. By means of this polynomial, determinants can be used to find the eigenvalues of the matrix A: they are precisely the roots of this polynomial, i.e., those complex numbers λ such that
\[ \chi_A(\lambda) = 0. \]
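Numerically, the relation between the determinant, the eigenvalues and the characteristic polynomial can be checked directly; an illustrative NumPy snippet (with an arbitrary test matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])

eigenvalues = np.linalg.eigvals(A)
print(np.prod(eigenvalues))        # product of eigenvalues ...
print(np.linalg.det(A))            # ... equals det(A) = 5

# coefficients of the characteristic polynomial det(t*I - A), highest degree first
print(np.poly(A))                  # [ 1. -5.  5.]  i.e. t**2 - 5*t + 5
```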
A Hermitian matrix is positive definite if all its eigenvalues are positive. Sylvester's criterion asserts that this is equivalent to the determinants of the submatrices
\[ A_k = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,k} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,k} \\ \vdots & \vdots & \ddots & \vdots \\ a_{k,1} & a_{k,2} & \cdots & a_{k,k} \end{pmatrix} \]
being positive, for all k between 1 and n.^{[19]}
The trace tr(A) is by definition the sum of the diagonal entries of A and also equals the sum of the eigenvalues. Thus, for complex matrices A,
\[ \det(\exp(A)) = \exp(\operatorname{tr}(A)), \]
or, for real matrices A,
\[ \operatorname{tr}(A) = \log(\det(\exp(A))). \]
Here exp(A) denotes the matrix exponential of A, because every eigenvalue λ of A corresponds to the eigenvalue exp(λ) of exp(A). In particular, given any logarithm of A, that is, any matrix L satisfying
\[ \exp(L) = A, \]
the determinant of A is given by
\[ \det(A) = \exp(\operatorname{tr}(L)). \]
For example, for n = 2, n = 3, and n = 4, respectively,
\[ \det(A) = \tfrac{1}{2}\left[ (\operatorname{tr} A)^2 - \operatorname{tr}\left(A^2\right) \right], \]
\[ \det(A) = \tfrac{1}{6}\left[ (\operatorname{tr} A)^3 - 3 \operatorname{tr} A \operatorname{tr}\left(A^2\right) + 2 \operatorname{tr}\left(A^3\right) \right], \]
\[ \det(A) = \tfrac{1}{24}\left[ (\operatorname{tr} A)^4 - 6 (\operatorname{tr} A)^2 \operatorname{tr}\left(A^2\right) + 3 \left(\operatorname{tr}\left(A^2\right)\right)^2 + 8 \operatorname{tr} A \operatorname{tr}\left(A^3\right) - 6 \operatorname{tr}\left(A^4\right) \right], \]
cf. Cayley–Hamilton theorem. Such expressions are deducible from combinatorial arguments, Newton's identities, or the Faddeev–LeVerrier algorithm. That is, for generic n, det(A) = (−1)^n c_0, the signed constant term of the characteristic polynomial, determined recursively from
\[ c_n = 1; \qquad c_{n-m} = -\frac{1}{m} \sum_{k=1}^{m} c_{n-m+k} \operatorname{tr}\left(A^k\right) \qquad (1 \le m \le n). \]
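A minimal NumPy sketch of this recursion (computing the characteristic-polynomial coefficients from traces of matrix powers and reading off the determinant; illustrative rather than numerically optimal, with names of our choosing):

```python
import numpy as np

def det_from_traces(A):
    """Determinant via the recursion c_{n-m} = -(1/m) * sum_k c_{n-m+k} * tr(A^k)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # traces of A^1, ..., A^n
    powers = [A]
    for _ in range(n - 1):
        powers.append(powers[-1] @ A)
    tr = [np.trace(P) for P in powers]          # tr[k-1] = tr(A^k)
    c = {n: 1.0}                                # coefficients of det(t*I - A)
    for m in range(1, n + 1):
        c[n - m] = -sum(c[n - m + k] * tr[k - 1] for k in range(1, m + 1)) / m
    return (-1) ** n * c[0]

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(det_from_traces(A))        # 5.0
print(np.linalg.det(A))          # 5.0 (for comparison)
```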
In the general case, this may also be obtained from^{[20]}
\[ \det(A) = \sum_{k_1, k_2, \ldots, k_n \ge 0} \prod_{l=1}^{n} \frac{(-1)^{k_l + 1}}{l^{k_l}\, k_l!} \operatorname{tr}\left(A^l\right)^{k_l}, \]
where the sum is taken over the set of all integers k_l ≥ 0 satisfying the equation
\[ \sum_{l=1}^{n} l\, k_l = n. \]
The formula can be expressed in terms of the complete exponential Bell polynomial of n arguments s_l = −(l − 1)! tr(A^l) as
\[ \det(A) = \frac{(-1)^n}{n!} B_n(s_1, s_2, \ldots, s_n). \]
This formula can also be used to find the determinant of a matrix A^{I}_{J} with multidimensional indices I = (i_1, i_2, ..., i_r) and J = (j_1, j_2, ..., j_r). The product and trace of such matrices are defined in a natural way as
\[ (AB)^{I}_{J} = \sum_{K} A^{I}_{K} B^{K}_{J}, \qquad \operatorname{tr}(A) = \sum_{I} A^{I}_{I}. \]
An important arbitrary dimension n identity can be obtained from the Mercator series expansion of the logarithm when the expansion converges. If every eigenvalue of A is less than 1 in absolute value,
\[ \det(I + A) = \exp\bigl(\operatorname{tr}\log(I + A)\bigr) = \sum_{k=0}^{\infty} \frac{1}{k!} \left( -\sum_{j=1}^{\infty} \frac{(-1)^{j}}{j} \operatorname{tr}\left(A^{j}\right) \right)^{k}, \]
where I is the identity matrix. More generally, if
\[ \sum_{k=0}^{\infty} \frac{1}{k!} \left( -\sum_{j=1}^{\infty} \frac{(-s)^{j}}{j} \operatorname{tr}\left(A^{j}\right) \right)^{k} \]
is expanded as a formal power series in s then all coefficients of s^{m} for m > n are zero and the remaining polynomial is det(I + sA).
For a positive definite matrix A, the trace operator gives the following tight lower and upper bounds on the log determinant
\[ \operatorname{tr}\left(I - A^{-1}\right) \le \log \det(A) \le \operatorname{tr}(A - I), \]
with equality if and only if A = I. This relationship can be derived via the formula for the Kullback–Leibler divergence between two multivariate normal distributions.
Also,
\[ \frac{n}{\operatorname{tr}\left(A^{-1}\right)} \le \det(A)^{1/n} \le \frac{1}{n}\operatorname{tr}(A) \le \sqrt{\frac{1}{n}\operatorname{tr}\left(A^2\right)}. \]
These inequalities can be proved by expressing the traces and the determinant in terms of the eigenvalues. As such, they represent the well-known fact that the harmonic mean is less than the geometric mean, which is less than the arithmetic mean, which is, in turn, less than the root mean square.
The Leibniz formula shows that the determinant of real (or analogously for complex) square matrices is a polynomial function from R^{n × n} to R. In particular, it is everywhere differentiable. Its derivative can be expressed using Jacobi's formula:^{[21]}
\[ \frac{d \det(A)}{d\alpha} = \operatorname{tr}\left( \operatorname{adj}(A) \frac{dA}{d\alpha} \right), \]
where adj(A) denotes the adjugate of A. In particular, if A is invertible, we have
\[ \frac{d \det(A)}{d\alpha} = \det(A) \operatorname{tr}\left( A^{-1} \frac{dA}{d\alpha} \right). \]
Expressed in terms of the entries of A, these are
\[ \frac{\partial \det(A)}{\partial a_{ij}} = \operatorname{adj}(A)_{ji} = \det(A)\left(A^{-1}\right)_{ji}. \]
Yet another equivalent formulation is
\[ \det(A + \epsilon X) - \det(A) = \operatorname{tr}\bigl(\operatorname{adj}(A)\, X\bigr)\, \epsilon + O\left(\epsilon^2\right) = \det(A) \operatorname{tr}\bigl(A^{-1} X\bigr)\, \epsilon + O\left(\epsilon^2\right), \]
using big O notation. The special case where A = I, the identity matrix, yields
\[ \det(I + \epsilon X) = 1 + \operatorname{tr}(X)\, \epsilon + O\left(\epsilon^2\right). \]
This identity is used in describing Lie algebras associated to certain matrix Lie groups. For example, the special linear group is defined by the equation det(A) = 1. The above formula shows that its Lie algebra is the special linear Lie algebra, consisting of those matrices having trace zero.
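Jacobi's formula above lends itself to a quick finite-difference check (the step size eps and the random test matrices below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
X = rng.standard_normal((3, 3))      # direction of the perturbation
eps = 1e-6

numerical = (np.linalg.det(A + eps * X) - np.linalg.det(A)) / eps
jacobi = np.linalg.det(A) * np.trace(np.linalg.inv(A) @ X)   # det(A) * tr(A^{-1} X)

print(numerical, jacobi)             # agree up to O(eps)
```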
Writing a 3 × 3 matrix as A = [a b c], where a, b, c are column vectors of length 3, then the gradient over one of the three vectors may be written as the cross product of the other two:
\[ \nabla_{\mathbf{a}} \det(A) = \mathbf{b} \times \mathbf{c}, \qquad \nabla_{\mathbf{b}} \det(A) = \mathbf{c} \times \mathbf{a}, \qquad \nabla_{\mathbf{c}} \det(A) = \mathbf{a} \times \mathbf{b}. \]
Historically, determinants were used long before matrices: A determinant was originally defined as a property of a system of linear equations. The determinant "determines" whether the system has a unique solution (which occurs precisely if the determinant is nonzero). In this sense, determinants were first used in the Chinese mathematics textbook The Nine Chapters on the Mathematical Art (九章算術, Chinese scholars, around the 3rd century BCE). In Europe, solutions of linear systems of two equations were expressed by Cardano in 1545 by a determinant-like entity.^{[22]}
Determinants proper originated from the work of Seki Takakazu in 1683 in Japan and, in parallel, of Leibniz in 1693.^{[23]}^{[24]}^{[25]}^{[26]} Cramer (1750) stated, without proof, Cramer's rule.^{[27]} Both Cramer and also Bézout (1779) were led to determinants by the question of plane curves passing through a given set of points.^{[28]}
Vandermonde (1771) first recognized determinants as independent functions.^{[24]} Laplace (1772) gave the general method of expanding a determinant in terms of its complementary minors: Vandermonde had already given a special case.^{[29]} Immediately following, Lagrange (1773) treated determinants of the second and third order and applied them to questions of elimination theory; he proved many special cases of general identities.
Gauss (1801) made the next advance. Like Lagrange, he made much use of determinants in the theory of numbers. He introduced the word "determinant" (Laplace had used "resultant"), though not in the present signification, but rather as applied to the discriminant of a quantic.^{[30]} Gauss also arrived at the notion of reciprocal (inverse) determinants, and came very near the multiplication theorem.
The next contributor of importance is Binet (1811, 1812), who formally stated the theorem relating to the product of two matrices of m columns and n rows, which for the special case of m = n reduces to the multiplication theorem. On the same day (November 30, 1812) that Binet presented his paper to the Academy, Cauchy also presented one on the subject. (See Cauchy–Binet formula.) In this he used the word "determinant" in its present sense,^{[31]}^{[32]} summarized and simplified what was then known on the subject, improved the notation, and gave the multiplication theorem with a proof more satisfactory than Binet's.^{[24]}^{[33]} With him begins the theory in its generality.
Jacobi (1841) used the functional determinant which Sylvester later called the Jacobian.^{[34]} In his memoirs in Crelle's Journal for 1841 he specially treats this subject, as well as the class of alternating functions which Sylvester has called alternants. About the time of Jacobi's last memoirs, Sylvester (1839) and Cayley began their work. Cayley (1841) introduced the modern notation for the determinant using vertical bars.^{[35]}^{[36]}
The study of special forms of determinants has been the natural result of the completion of the general theory. Axisymmetric determinants have been studied by Lebesgue, Hesse, and Sylvester; persymmetric determinants by Sylvester and Hankel; circulants by Catalan, Spottiswoode, Glaisher, and Scott; skew determinants and Pfaffians, in connection with the theory of orthogonal transformation, by Cayley; continuants by Sylvester; Wronskians (so called by Muir) by Christoffel and Frobenius; compound determinants by Sylvester, Reiss, and Picquet; Jacobians and Hessians by Sylvester; and symmetric gauche determinants by Trudi. Of the textbooks on the subject Spottiswoode's was the first. In America, Hanus (1886), Weld (1893), and Muir/Metzler (1933) published treatises.
Determinants can be used to describe the solutions of a linear system of equations, written in matrix form as Ax = b. This equation has a unique solution x if and only if det(A) is nonzero. In this case, the solution is given by Cramer's rule:
\[ x_i = \frac{\det(A_i)}{\det(A)} \qquad (i = 1, 2, \ldots, n), \]
where A_i is the matrix formed by replacing the i-th column of A by the column vector b. This follows immediately by column expansion of the determinant, i.e.
\[ \det(A_i) = \det\bigl[ a_1, \ldots, b, \ldots, a_n \bigr] = \sum_{j=1}^{n} x_j \det\bigl[ a_1, \ldots, a_{i-1}, a_j, a_{i+1}, \ldots, a_n \bigr] = x_i \det(A), \]
where the vectors a_j are the columns of A. The rule is also implied by the identity
\[ A \operatorname{adj}(A) = \operatorname{adj}(A)\, A = \det(A)\, I_n. \]
Cramer's rule can be implemented in O(n^3) time, which is comparable to more common methods of solving systems of linear equations, such as LU, QR, or singular value decomposition.^{[37]}
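A compact NumPy sketch of Cramer's rule (the function name and test system are ours; it is fine for small, well-conditioned systems, while larger systems are better solved with np.linalg.solve):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("matrix is singular; Cramer's rule does not apply")
    x = np.empty_like(b)
    for i in range(A.shape[0]):
        Ai = A.copy()
        Ai[:, i] = b                  # replace the i-th column by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b))             # [0.8 1.4]
print(np.linalg.solve(A, b))          # same result
```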
Determinants can be used to characterize linearly dependent vectors: det(A) is zero if and only if the column vectors (or, equivalently, the row vectors) of the matrix A are linearly dependent.^{[38]} For example, given two linearly independent vectors in R^3, a third vector lies in the plane spanned by the former two vectors exactly if the determinant of the 3 × 3 matrix consisting of the three vectors is zero. The same idea is also used in the theory of differential equations: given n functions f_1(x), …, f_n(x) (supposed to be n − 1 times differentiable), the Wronskian is defined to be
\[ W(f_1, \ldots, f_n)(x) = \begin{vmatrix} f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1'(x) & f_2'(x) & \cdots & f_n'(x) \\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{vmatrix}. \]
It is nonzero (for some x) in a specified interval if and only if the given functions and all their derivatives up to order n − 1 are linearly independent. If it can be shown that the Wronskian is zero everywhere on an interval then, in the case of analytic functions, this implies the given functions are linearly dependent. See the Wronskian and linear independence. Another such use of the determinant is the resultant, which gives a criterion when two polynomials have a common root.^{[39]}
The determinant can be thought of as assigning a number to every sequence of n vectors in R^{n}, by using the square matrix whose columns are the given vectors. For instance, an orthogonal matrix with entries in R^{n} represents an orthonormal basis in Euclidean space. The determinant of such a matrix determines whether the orientation of the basis is consistent with or opposite to the orientation of the standard basis. If the determinant is +1, the basis has the same orientation. If it is −1, the basis has the opposite orientation.
More generally, if the determinant of A is positive, A represents an orientationpreserving linear transformation (if A is an orthogonal 2 × 2 or 3 × 3 matrix, this is a rotation), while if it is negative, A switches the orientation of the basis.
As pointed out above, the absolute value of the determinant of n real vectors is equal to the volume of the parallelepiped spanned by those vectors. As a consequence, if f : R^n → R^n is the linear map given by multiplication with a matrix A, and S ⊆ R^n is any measurable subset, then the volume of f(S) is given by |det(A)| times the volume of S.^{[40]} More generally, if the linear map f : R^n → R^m is represented by the m × n matrix A, then the n-dimensional volume of f(S) is given by:
\[ \operatorname{volume}(f(S)) = \sqrt{\det\!\left(A^{\mathrm{T}} A\right)} \cdot \operatorname{volume}(S). \]
By calculating the volume of the tetrahedron bounded by four points, they can be used to identify skew lines. The volume of the tetrahedron with vertices a, b, c and d is (1/6)·|det(a − b, b − c, c − d)|, where the edge vectors a − b, b − c, c − d may be replaced by any other combination of pairs of vertices that form a spanning tree over the vertices.
For a general differentiable function, much of the above carries over by considering the Jacobian matrix of f. For
\[ f : \mathbf{R}^n \to \mathbf{R}^n, \]
the Jacobian matrix is the n × n matrix whose entries are given by the partial derivatives
\[ D(f) = \left( \frac{\partial f_i}{\partial x_j} \right)_{1 \le i, j \le n}. \]
Its determinant, the Jacobian determinant, appears in the higher-dimensional version of integration by substitution: for suitable functions f and an open subset U of R^{n} (the domain of f), the integral over f(U) of some other function φ : R^{n} → R^{m} is given by
\[ \int_{f(U)} \phi(\mathbf{v})\, d\mathbf{v} = \int_{U} \phi(f(\mathbf{u})) \left| \det(D f)(\mathbf{u}) \right| d\mathbf{u}. \]
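For instance, for the polar-coordinate map f(r, θ) = (r cos θ, r sin θ) the Jacobian determinant equals r. A quick finite-difference check in NumPy (the step size h, helper names and sample point are illustrative choices of ours):

```python
import numpy as np

def f(u):
    r, theta = u
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def jacobian(f, u, h=1e-6):
    """Numerical Jacobian matrix of f at u via central differences."""
    u = np.asarray(u, dtype=float)
    n = u.size
    J = np.empty((n, n))
    for j in range(n):
        step = np.zeros(n)
        step[j] = h
        J[:, j] = (f(u + step) - f(u - step)) / (2 * h)
    return J

u = np.array([2.0, 0.7])                 # r = 2, theta = 0.7
print(np.linalg.det(jacobian(f, u)))     # approximately 2.0 = r
```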
The Jacobian also occurs in the inverse function theorem.
The above identities concerning the determinant of products and inverses of matrices imply that similar matrices have the same determinant: two matrices A and B are similar if there exists an invertible matrix X such that A = X^{−1}BX. Indeed, repeatedly applying the above identities yields
\[ \det(A) = \det\left(X^{-1}\right) \det(B) \det(X) = \det(B) \det\left(X^{-1}\right) \det(X) = \det(B). \]
The determinant is therefore also called a similarity invariant. The determinant of a linear transformation
\[ T : V \to V \]
for some finite-dimensional vector space V is defined to be the determinant of the matrix describing it, with respect to an arbitrary choice of basis in V. By the similarity invariance, this determinant is independent of the choice of the basis for V and therefore only depends on the endomorphism T.
The above definition of the determinant using the Leibniz rule holds more generally when the entries of the matrix are elements of a commutative ring R, such as the integers Z, as opposed to the field of real or complex numbers. Moreover, the characterization of the determinant as the unique alternating multilinear map that satisfies det(I) = 1 still holds, as do all the properties that result from that characterization.^{[41]}
A matrix A is invertible (in the sense that there is an inverse matrix whose entries are in R) if and only if its determinant is an invertible element in R.^{[42]} For R = Z, this means that the determinant is +1 or −1. Such a matrix is called unimodular.
The determinant being multiplicative, it defines a group homomorphism
\[ \operatorname{GL}_n(R) \to R^{\times}, \]
between the general linear group (the group of invertible n × n matrices with entries in R) and the multiplicative group of units in R. Since it respects the multiplication in both groups, this map is a group homomorphism.
Given a ring homomorphism f : R → S, there is a map GL_n(R) → GL_n(S) given by replacing all entries in R by their images under f. The determinant respects these maps, i.e., the identity
\[ f\bigl(\det\bigl((a_{i,j})\bigr)\bigr) = \det\bigl((f(a_{i,j}))\bigr) \]
holds. In other words, the corresponding square of maps commutes.
For example, the determinant of the complex conjugate of a complex matrix (which is also the determinant of its conjugate transpose) is the complex conjugate of its determinant, and for integer matrices: the reduction modulo m of the determinant of such a matrix is equal to the determinant of the matrix reduced modulo m (the latter determinant being computed using modular arithmetic). In the language of category theory, the determinant is a natural transformation between the two functors GL_n and (−)^{×} (the group of units).^{[43]} Adding yet another layer of abstraction, this is captured by saying that the determinant is a morphism of algebraic groups, from the general linear group to the multiplicative group,
\[ \det : \operatorname{GL}_n \to \mathbb{G}_m. \]
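The compatibility with reduction modulo m, for example, can be checked with a short Python snippet (exact integer arithmetic; the helper, matrix and modulus are arbitrary illustrative choices):

```python
def det3(m):
    """Exact integer determinant of a 3 x 3 matrix (rule of Sarrus)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

A = [[3, 8, 1],
     [4, 7, 9],
     [2, 5, 6]]
m = 7

lhs = det3(A) % m                                    # determinant first, then reduce mod m
rhs = det3([[x % m for x in row] for row in A]) % m  # reduce mod m first, then determinant
print(lhs, rhs)                                      # both 5
```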
The determinant of a linear transformation T : V → V of an n-dimensional vector space V or, more generally, a free module M of (finite) rank n over a commutative ring R can be formulated in a coordinate-free manner by considering the n-th exterior power Λ^n V of V.^{[44]} The map T induces a linear map
\[ \Lambda^n T : \Lambda^n V \to \Lambda^n V, \qquad v_1 \wedge v_2 \wedge \cdots \wedge v_n \mapsto T v_1 \wedge T v_2 \wedge \cdots \wedge T v_n. \]
As Λ^n V is one-dimensional, the map Λ^n T is given by multiplying with some scalar, i.e., an element in R. Some authors such as (Bourbaki 1998) use this fact to define the determinant to be the element in R satisfying the following identity (for all v_i in V):
\[ (\Lambda^n T)(v_1 \wedge \cdots \wedge v_n) = \det(T) \cdot v_1 \wedge \cdots \wedge v_n. \]
This definition agrees with the more concrete coordinate-dependent definition. This can be shown using the uniqueness of a multilinear alternating form on n-tuples of vectors in R^n. For this reason, the highest nonzero exterior power Λ^n(V) (as opposed to the determinant associated to an endomorphism) is sometimes also called the determinant of V, and similarly for more involved objects such as vector bundles or chain complexes of vector spaces. Minors of a matrix can also be cast in this setting, by considering lower alternating forms Λ^k V with k < n.^{[45]}
Determinants as treated above admit several variants: the permanent of a matrix is defined as the determinant, except that the factors sgn(σ) occurring in Leibniz's rule are omitted. The immanant generalizes both by introducing a character of the symmetric group S_n in Leibniz's rule.
For any associative algebra A that is finite-dimensional as a vector space over a field F, there is a determinant map^{[46]}
\[ \det : A \to F. \]
This definition proceeds by establishing the characteristic polynomial independently of the determinant, and defining the determinant as the lowest order term of this polynomial. This general definition recovers the determinant for the matrix algebra, but also includes several further cases including the determinant of a quaternion,
\[ \det(a + ib + jc + kd) = a^2 + b^2 + c^2 + d^2, \]
and the norm of a field extension; the Pfaffian of a skew-symmetric matrix and the reduced norm of a central simple algebra also arise as special cases of this construction.
For matrices with an infinite number of rows and columns, the above definitions of the determinant do not carry over directly. For example, in the Leibniz formula, an infinite sum (all of whose terms are infinite products) would have to be calculated. Functional analysis provides different extensions of the determinant for such infinite-dimensional situations, which however only work for particular kinds of operators.
The Fredholm determinant defines the determinant for operators known as trace class operators by an appropriate generalization of the formula
\[ \det(I + A) = \exp\bigl(\operatorname{tr}(\log(I + A))\bigr). \]
Another infinite-dimensional notion of determinant is the functional determinant.
For operators in a finite factor, one may define a positive real-valued determinant called the Fuglede–Kadison determinant using the canonical trace. In fact, corresponding to every tracial state on a von Neumann algebra there is a notion of Fuglede–Kadison determinant.
For matrices over noncommutative rings, multilinearity and alternating properties are incompatible for n ≥ 2,^{[47]} so there is no good definition of the determinant in this setting.
For square matrices with entries in a noncommutative ring, there are various difficulties in defining determinants analogously to that for commutative rings. A meaning can be given to the Leibniz formula provided that the order for the product is specified, and similarly for other definitions of the determinant, but noncommutativity then leads to the loss of many fundamental properties of the determinant, such as the multiplicative property or the fact that the determinant is unchanged under transposition of the matrix. Over noncommutative rings, there is no reasonable notion of a multilinear form (existence of a nonzero bilinear form with a regular element of R as value on some pair of arguments implies that R is commutative). Nevertheless, various notions of noncommutative determinant have been formulated that preserve some of the properties of determinants, notably quasideterminants and the Dieudonné determinant. For some classes of matrices with noncommutative elements, one can define the determinant and prove linear algebra theorems that are very similar to their commutative analogs. Examples include the q-determinant on quantum groups, the Capelli determinant on Capelli matrices, and the Berezinian on supermatrices (i.e., matrices whose entries are elements of graded rings).^{[48]} Manin matrices form the class closest to matrices with commutative elements.
Determinants are mainly used as a theoretical tool. They are rarely calculated explicitly in numerical linear algebra, where for applications like checking invertibility and finding eigenvalues the determinant has largely been supplanted by other techniques.^{[49]} Computational geometry, however, does frequently use calculations related to determinants.^{[50]}
While the determinant can be computed directly using the Leibniz rule, this approach is extremely inefficient for large matrices, since that formula requires calculating n! (n factorial) products for an n × n matrix. Thus, the number of required operations grows very quickly: it is of order n!. The Laplace expansion is similarly inefficient. Therefore, more involved techniques have been developed for calculating determinants.
Some methods compute det(A) by writing the matrix as a product of matrices whose determinants can be more easily computed. Such techniques are referred to as decomposition methods. Examples include the LU decomposition, the QR decomposition or the Cholesky decomposition (for positive definite matrices). These methods are of order O(n^3), which is a significant improvement over the O(n!) of the Leibniz formula.^{[51]}
For example, LU decomposition expresses A as a product
\[ A = PLU \]
of a permutation matrix P (which has exactly a single 1 in each column, and otherwise zeros), a lower triangular matrix L and an upper triangular matrix U. The determinants of the two triangular matrices L and U can be quickly calculated, since they are the products of the respective diagonal entries. The determinant of P is just the sign ε of the corresponding permutation (which is +1 for an even number of transpositions and −1 for an odd number of transpositions). Once such an LU decomposition is known for A, its determinant is readily computed as
\[ \det(A) = \varepsilon \cdot \det(L) \cdot \det(U). \]
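With SciPy's LU routine this reads, in a minimal sketch (scipy.linalg.lu returns the factors P, L, U with A = P·L·U; the sign of the permutation is recovered here by counting inversions, and the helper name is ours):

```python
import numpy as np
from scipy.linalg import lu

def permutation_sign(P):
    """Sign of the permutation represented by the permutation matrix P."""
    perm = np.argmax(P, axis=0)          # column j has its 1 in row perm[j]
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1.0 if inversions % 2 else 1.0

A = np.array([[2.0, 0.0, 1.0], [1.0, 3.0, 2.0], [0.0, 1.0, 1.0]])
P, L, U = lu(A)                          # A = P @ L @ U
det = permutation_sign(P) * np.prod(np.diag(L)) * np.prod(np.diag(U))
print(det, np.linalg.det(A))             # both 3.0
```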
The order O(n^3) reached by decomposition methods has been improved by different methods. If two matrices of order n can be multiplied in time M(n), where M(n) ≥ n^a for some a > 2, then there is an algorithm computing the determinant in time O(M(n)).^{[52]} This means, for example, that an O(n^{2.376}) algorithm exists based on the Coppersmith–Winograd algorithm. This exponent has been further lowered, as of 2016, to 2.373.^{[53]}
In addition to the complexity of the algorithm, further criteria can be used to compare algorithms. Especially for applications concerning matrices over rings, algorithms that compute the determinant without any divisions exist. (By contrast, Gauss elimination requires divisions.) One such algorithm, having complexity O(n^4), is based on the following idea: one replaces permutations (as in the Leibniz rule) by so-called closed ordered walks, in which several items can be repeated. The resulting sum has more terms than in the Leibniz rule, but in the process several of these products can be reused, making it more efficient than naively computing with the Leibniz rule.^{[54]} Algorithms can also be assessed according to their bit complexity, i.e., how many bits of accuracy are needed to store intermediate values occurring in the computation. For example, the Gaussian elimination (or LU decomposition) method is of order O(n^3), but the bit length of intermediate values can become exponentially long.^{[55]} By comparison, the Bareiss algorithm is an exact-division method (so it does use division, but only in cases where these divisions can be performed without remainder); it is of the same order, but its bit complexity is roughly the bit size of the original entries in the matrix times n.^{[56]}
If the determinant of A and the inverse of A have already been computed, the matrix determinant lemma allows rapid calculation of the determinant of A + uv^{T}, where u and v are column vectors: det(A + uv^{T}) = (1 + v^{T}A^{−1}u) det(A).
Charles Dodgson (i.e. Lewis Carroll of Alice's Adventures in Wonderland fame) invented a method for computing determinants called Dodgson condensation. Unfortunately this interesting method does not always work in its original form.