# Kernel (linear algebra)




In mathematics, more specifically in linear algebra and functional analysis, the **kernel** of a linear mapping, also known as the **null space** or **nullspace**, is the set of vectors in the domain of the mapping which are mapped to the zero vector.^{[1]}^{[2]} That is, given a linear map *L* : *V* → *W* between two vector spaces *V* and *W*, the kernel of *L* is the set of all elements **v** of *V* for which *L*(**v**) = **0**, where **0** denotes the zero vector in *W*,^{[3]} or more symbolically:

$$\ker(L) = \{\mathbf{v} \in V : L(\mathbf{v}) = \mathbf{0}\} = L^{-1}(\mathbf{0}).$$

## Properties

The kernel of *L* is a linear subspace of the domain *V*.^{[4]}^{[3]}
In the linear map *L* : *V* → *W*, two elements of *V* have the same image in *W* if and only if their difference lies in the kernel of *L*:

$$L(\mathbf{v}_1) = L(\mathbf{v}_2) \quad\text{if and only if}\quad L(\mathbf{v}_1 - \mathbf{v}_2) = \mathbf{0}.$$

From this, it follows that the image of *L* is isomorphic to the quotient of *V* by the kernel:

$$\operatorname{im}(L) \cong V / \ker(L).$$

In the case where *V* is finite-dimensional, this implies the rank-nullity theorem:

$$\dim(\ker L) + \dim(\operatorname{im} L) = \dim(V),$$

where by *rank* we mean the dimension of the image of *L*, and by *nullity* that of the kernel of *L*.^{[5]}

When *V* is an inner product space, the quotient *V* / ker(*L*) can be identified with the orthogonal complement in *V* of ker(*L*). This is the generalization to linear operators of the row space, or coimage, of a matrix.

## Application to modules

The notion of kernel also makes sense for homomorphisms of modules, which are generalizations of vector spaces where the scalars are elements of a ring, rather than a field. The domain of the mapping is a module, with the kernel constituting a submodule. Here, the concepts of rank and nullity do not necessarily apply.

## In functional analysis

If *V* and *W* are topological vector spaces such that *W* is finite-dimensional, then a linear operator *L* : *V* → *W* is continuous if and only if the kernel of *L* is a closed subspace of *V*.

## Representation as matrix multiplication

Consider a linear map represented as an *m* × *n* matrix *A* with coefficients in a field *K* (typically **R** or **C**), that is operating on column vectors **x** with *n* components over *K*.
The kernel of this linear map is the set of solutions to the equation *A***x** = **0**, where **0** is understood as the zero vector. The dimension of the kernel of *A* is called the **nullity** of *A*. In set-builder notation,

$$\operatorname{N}(A) = \operatorname{Null}(A) = \ker(A) = \{\mathbf{x} \in K^n : A\mathbf{x} = \mathbf{0}\}.$$

The matrix equation is equivalent to a homogeneous system of linear equations:

$$A\mathbf{x} = \mathbf{0} \;\Leftrightarrow\; \begin{cases} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = 0 \\ \qquad\qquad\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = 0. \end{cases}$$

Thus the kernel of *A* is the same as the solution set to the above homogeneous equations.
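For concreteness, this solution set can be computed mechanically. The following is a minimal sketch assuming the SymPy library (an assumption about tooling; the article itself prescribes no software), whose `Matrix.nullspace()` method returns a basis of the kernel:

```python
# A minimal sketch, assuming SymPy is available.
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])      # rank 1, so the kernel has dimension 2

# Matrix.nullspace() returns a list of basis vectors of ker(A).
basis = A.nullspace()        # [Matrix([-2, 1, 0]), Matrix([-3, 0, 1])]

# Every basis vector solves the homogeneous system A x = 0.
for v in basis:
    assert A * v == Matrix([0, 0])
```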

### Subspace properties

The kernel of a matrix *A* over a field *K* is a linear subspace of **K**^{n}. That is, the kernel of *A*, the set Null(*A*), has the following three properties:

- Null(*A*) always contains the zero vector, since *A***0** = **0**.
- If **x** ∈ Null(*A*) and **y** ∈ Null(*A*), then **x** + **y** ∈ Null(*A*), since *A*(**x** + **y**) = *A***x** + *A***y** = **0**. This follows from the distributivity of matrix multiplication over addition.
- If **x** ∈ Null(*A*) and *c* is a scalar *c* ∈ *K*, then *c***x** ∈ Null(*A*), since *A*(*c***x**) = *c*(*A***x**) = *c***0** = **0**.

### The row space of a matrix

The product *A***x** can be written in terms of the dot product of vectors as follows:

$$A\mathbf{x} = \begin{bmatrix} \mathbf{a}_1 \cdot \mathbf{x} \\ \mathbf{a}_2 \cdot \mathbf{x} \\ \vdots \\ \mathbf{a}_m \cdot \mathbf{x} \end{bmatrix}.$$

Here, **a**_{1}, ... , **a**_{m} denote the rows of the matrix *A*. It follows that **x** is in the kernel of *A* if and only if **x** is orthogonal (or perpendicular) to each of the row vectors of *A* (since orthogonality is defined as having a dot product of 0).

The row space, or coimage, of a matrix *A* is the span of the row vectors of *A*. By the above reasoning, the kernel of *A* is the orthogonal complement to the row space. That is, a vector **x** lies in the kernel of *A* if and only if it is perpendicular to every vector in the row space of *A*.

The dimension of the row space of *A* is called the rank of *A*, and the dimension of the kernel of *A* is called the **nullity** of *A*. These quantities are related by the rank-nullity theorem^{[5]}

$$\operatorname{rank}(A) + \operatorname{nullity}(A) = n.$$

### Left null space

The **left null space**, or cokernel, of a matrix *A* consists of all column vectors **x** such that **x**^{T}*A* = **0**^{T}, where T denotes the transpose of a matrix. The left null space of *A* is the same as the kernel of *A*^{T}. The left null space of *A* is the orthogonal complement to the column space of *A*, and is dual to the cokernel of the associated linear transformation. The kernel, the row space, the column space, and the left null space of *A* are the **four fundamental subspaces** associated to the matrix *A*.
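The four subspaces can also be computed numerically. The sketch below assumes NumPy and SciPy (an assumption about tooling): `scipy.linalg.null_space` and `scipy.linalg.orth` return orthonormal bases of the kernel and of the column space, respectively.

```python
# A sketch of the four fundamental subspaces, assuming NumPy and SciPy.
import numpy as np
from scipy.linalg import null_space, orth

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1

kernel    = null_space(A)     # ker(A), here 2-dimensional in R^3
left_null = null_space(A.T)   # left null space = ker(A^T), 1-dimensional in R^2
col_space = orth(A)           # column space of A
row_space = orth(A.T)         # row space of A

# ker(A) is the orthogonal complement of the row space,
# and ker(A^T) is the orthogonal complement of the column space.
assert np.allclose(row_space.T @ kernel, 0.0)
assert np.allclose(col_space.T @ left_null, 0.0)
```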

### Nonhomogeneous systems of linear equations

The kernel also plays a role in the solution to a nonhomogeneous system of linear equations:

$$A\mathbf{x} = \mathbf{b} \quad\text{or}\quad \begin{cases} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\ \qquad\qquad\vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m. \end{cases}$$

If **u** and **v** are two possible solutions to the above equation, then

$$A(\mathbf{u} - \mathbf{v}) = A\mathbf{u} - A\mathbf{v} = \mathbf{b} - \mathbf{b} = \mathbf{0}.$$

Thus, the difference of any two solutions to the equation *A***x** = **b** lies in the kernel of *A*.

It follows that any solution to the equation *A***x** = **b** can be expressed as the sum of a fixed solution **v** and an arbitrary element of the kernel. That is, the solution set to the equation *A***x** = **b** is

$$\mathbf{v} + \operatorname{Null}(A) = \{\mathbf{v} + \mathbf{x} : \mathbf{x} \in \operatorname{Null}(A)\}.$$

Geometrically, this says that the solution set to *A***x** = **b** is the translation of the kernel of *A* by the vector **v**. See also Fredholm alternative and flat (geometry).
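This translation picture is easy to check numerically. A hedged sketch assuming NumPy, reusing the matrix and kernel basis vector from the Illustration section below:

```python
# A hedged sketch, assuming NumPy; the matrix is the one used in the
# Illustration section below, and b is an arbitrary right-hand side.
import numpy as np

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
b = np.array([1.0, 2.0])

v = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution of A x = b
k = np.array([-1.0, -26.0, 16.0])          # an element of ker(A)

# Translating v by any kernel element again solves A x = b.
for t in (0.0, 1.0, -2.5):
    assert np.allclose(A @ (v + t * k), b)
```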

## Illustration

The following is a simple illustration of the computation of the kernel of a matrix (see § Computation by Gaussian elimination below for methods better suited to more complex calculations). The illustration also touches on the row space and its relation to the kernel.

Consider the matrix

$$A = \begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix}.$$

The kernel of this matrix consists of all vectors (*x*, *y*, *z*) ∈ **R**^{3} for which

$$\begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$$

which can be expressed as a homogeneous system of linear equations involving *x*, *y*, and *z*:

$$\begin{aligned} 2x + 3y + 5z &= 0 \\ -4x + 2y + 3z &= 0. \end{aligned}$$

The same linear equations can also be written in matrix form as:

$$\left[\begin{array}{ccc|c} 2 & 3 & 5 & 0 \\ -4 & 2 & 3 & 0 \end{array}\right].$$

Through Gauss-Jordan elimination, the matrix can be reduced to:

$$\left[\begin{array}{ccc|c} 1 & 0 & 1/16 & 0 \\ 0 & 1 & 13/8 & 0 \end{array}\right].$$

Rewriting the matrix in equation form yields:

$$\begin{aligned} x &= -\tfrac{1}{16}z \\ y &= -\tfrac{13}{8}z. \end{aligned}$$

The elements of the kernel can be further expressed in parametric form, as follows:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = c \begin{bmatrix} -1/16 \\ -13/8 \\ 1 \end{bmatrix} \quad (\text{where } c \in \mathbb{R}).$$

Since *c* is a free variable ranging over all real numbers, this can be expressed equally well as:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = c \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix}.$$

The kernel of *A* is precisely the solution set to these equations (in this case, a line through the origin in **R**^{3}). Here, the vector (-1, -26, 16)^{T} constitutes a basis of the kernel of *A*, so the nullity of *A* is 1.

The following dot products are zero:

$$\begin{bmatrix} 2 & 3 & 5 \end{bmatrix} \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix} = 0 \quad\text{and}\quad \begin{bmatrix} -4 & 2 & 3 \end{bmatrix} \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix} = 0,$$

which illustrates that vectors in the kernel of *A* are orthogonal to each of the row vectors of *A*.

These two (linearly independent) row vectors span the row space of *A*, a plane orthogonal to the vector (-1, -26, 16)^{T}.

Since the rank of *A* is 2, the nullity of *A* is 1, and the number of columns of *A* is 3, we have an illustration of the rank-nullity theorem: 2 + 1 = 3.
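The whole illustration can be replayed mechanically with exact arithmetic; a sketch assuming SymPy (an assumption about tooling):

```python
# Replaying the illustration with exact arithmetic (SymPy assumed).
from sympy import Matrix

A = Matrix([[2, 3, 5],
            [-4, 2, 3]])

rref, pivots = A.rref()
# rref == [[1, 0, 1/16], [0, 1, 13/8]]; pivot columns (0, 1), so rank(A) == 2
print(rref, pivots)

basis = A.nullspace()                        # [Matrix([-1/16, -13/8, 1])]
assert len(basis) == A.cols - len(pivots)    # nullity = n - rank = 1

# Scaling by 16 gives the integer basis vector (-1, -26, 16) used above.
print(16 * basis[0])                         # Matrix([[-1], [-26], [16]])
```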

## Examples

- If *L* : **R**^{m} → **R**^{n}, then the kernel of *L* is the solution set to a homogeneous system of linear equations. As in the above illustration, if *L* is the operator

  $$L(x_1, x_2, x_3) = (2x_1 + 3x_2 + 5x_3,\; -4x_1 + 2x_2 + 3x_3),$$

  then the kernel of *L* is the set of solutions to the equations

  $$\begin{aligned} 2x_1 + 3x_2 + 5x_3 &= 0 \\ -4x_1 + 2x_2 + 3x_3 &= 0. \end{aligned}$$

- Let *C*[0,1] denote the vector space of all continuous real-valued functions on the interval [0,1], and define *L* : *C*[0,1] → **R** by the rule

  $$L(f) = f(0.3).$$

  Then the kernel of *L* consists of all functions *f* ∈ *C*[0,1] for which *f*(0.3) = 0.

- Let *C*^{∞}(**R**) be the vector space of all infinitely differentiable functions **R** → **R**, and let *D* : *C*^{∞}(**R**) → *C*^{∞}(**R**) be the differentiation operator

  $$D(f) = \frac{df}{dx}.$$

  Then the kernel of *D* consists of all functions in *C*^{∞}(**R**) whose derivatives are zero, i.e. the set of all constant functions (see the symbolic check after this list).

- Let **R**^{∞} be the direct product of infinitely many copies of **R**, and let *s* : **R**^{∞} → **R**^{∞} be the shift operator

  $$s(x_1, x_2, x_3, x_4, \ldots) = (x_2, x_3, x_4, \ldots).$$

  Then the kernel of *s* is the one-dimensional subspace consisting of all vectors (*x*_{1}, 0, 0, ...).

- If *V* is an inner product space and *W* is a subspace, the kernel of the orthogonal projection *V* → *W* is the orthogonal complement to *W* in *V*.
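The differentiation example above can be checked symbolically; a tiny sketch assuming SymPy, whose ODE solver recovers exactly the constant functions:

```python
# Kernel of the differentiation operator D(f) = f' (SymPy assumed):
# solving f'(x) = 0 returns f(x) = C1, i.e. the constant functions.
from sympy import Function, Eq, dsolve, symbols

x = symbols('x')
f = Function('f')
print(dsolve(Eq(f(x).diff(x), 0), f(x)))   # Eq(f(x), C1)
```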

## Computation by Gaussian elimination

A basis of the kernel of a matrix may be computed by Gaussian elimination.

For this purpose, given an *m* × *n* matrix *A*, we first construct the row-augmented matrix

$$\left[\begin{array}{c} A \\ \hline I \end{array}\right],$$

where *I* is the *n* × *n* identity matrix.

Computing its column echelon form by Gaussian elimination (or any other suitable method), we get a matrix

$$\left[\begin{array}{c} B \\ \hline C \end{array}\right].$$

A basis of the kernel of *A* consists of the non-zero columns of *C* such that the corresponding column of *B* is a zero column.

In fact, the computation may be stopped as soon as the upper matrix is in column echelon form: the remainder of the computation consists in changing the basis of the vector space generated by the columns whose upper part is zero.

For example, suppose that

$$A = \begin{bmatrix} 1 & 0 & -3 & 0 & 2 & -8 \\ 0 & 1 & 5 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 & 7 & -9 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}.$$

Then

$$\left[\begin{array}{c} A \\ \hline I \end{array}\right] = \left[\begin{array}{cccccc} 1 & 0 & -3 & 0 & 2 & -8 \\ 0 & 1 & 5 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 & 7 & -9 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right].$$

Putting the upper part in column echelon form by column operations on the whole matrix gives

$$\left[\begin{array}{c} B \\ \hline C \end{array}\right] = \left[\begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 3 & -2 & 8 \\ 0 & 1 & 0 & -5 & 1 & -4 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & -7 & 9 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right].$$

The last three columns of *B* are zero columns. Therefore, the three last vectors of *C*,

$$\begin{bmatrix} 3 \\ -5 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix},\quad \begin{bmatrix} -2 \\ 1 \\ 0 \\ -7 \\ 1 \\ 0 \end{bmatrix},\quad \begin{bmatrix} 8 \\ -4 \\ 0 \\ 9 \\ 0 \\ 1 \end{bmatrix}$$

are a basis of the kernel of *A*.

Proof that the method computes the kernel: Since column operations correspond to post-multiplication by invertible matrices, the fact that $\begin{bmatrix} A \\ I \end{bmatrix}$ reduces to $\begin{bmatrix} B \\ C \end{bmatrix}$ means that there exists an invertible matrix $P$ such that $\begin{bmatrix} A \\ I \end{bmatrix} P = \begin{bmatrix} B \\ C \end{bmatrix}$, with $B$ in column echelon form. Thus $AP = B$, $P = C$, and $AC = B$. A column vector $\mathbf{v}$ belongs to the kernel of $A$ (that is, $A\mathbf{v} = \mathbf{0}$) if and only if $B\mathbf{w} = \mathbf{0}$, where $\mathbf{w} = P^{-1}\mathbf{v} = C^{-1}\mathbf{v}$. As $B$ is in column echelon form, $B\mathbf{w} = \mathbf{0}$ if and only if the nonzero entries of $\mathbf{w}$ correspond to the zero columns of $B$. By multiplying by $C$, one may deduce that this is the case if and only if $\mathbf{v} = C\mathbf{w}$ is a linear combination of the corresponding columns of $C$.
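The augmented-matrix method can be coded directly. The sketch below (SymPy assumed) performs the column operations on [*A*; *I*] as row operations on the transpose [*A*^{T} | *I*], so that SymPy's `rref()` carries out the elimination; it may return a different, but equally valid, basis than the hand computation above.

```python
# A sketch of the method above, assuming SymPy for exact arithmetic.
# Column operations on the stacked matrix [A; I] are realized as row
# operations on its transpose [A^T | I].
from sympy import Matrix, eye

def kernel_basis(A):
    m, n = A.shape
    M = A.T.row_join(eye(n))       # [A^T | I]
    R = M.rref()[0]                # row-reduce; records the column operations
    basis = []
    for i in range(n):
        b_part = R[i, :m]          # corresponds to a column of B
        c_part = R[i, m:]          # corresponds to a column of C
        # A zero column of B marks a kernel vector in the C block.
        if all(e == 0 for e in b_part) and any(e != 0 for e in c_part):
            basis.append(c_part.T)
    return basis

A = Matrix([[1, 0, -3, 0, 2, -8],
            [0, 1, 5, 0, -1, 4],
            [0, 0, 0, 1, 7, -9],
            [0, 0, 0, 0, 0, 0]])
for v in kernel_basis(A):
    assert A * v == Matrix([0, 0, 0, 0])   # each vector lies in ker(A)
```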

## Numerical computation

The problem of computing the kernel on a computer depends on the nature of the coefficients.

### Exact coefficients

If the coefficients of the matrix are exactly given numbers, the column echelon form of the matrix may be computed with the Bareiss algorithm more efficiently than with Gaussian elimination. It is even more efficient to use modular arithmetic and the Chinese remainder theorem, which reduces the problem to several similar ones over finite fields (this avoids the overhead induced by the non-linearity of the computational complexity of integer multiplication).

For coefficients in a finite field, Gaussian elimination works well, but for the large matrices that occur in cryptography and Gröbner basis computation, better algorithms are known, which have roughly the same computational complexity, but are faster and behave better with modern computer hardware.
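To make the finite-field remark concrete, here is a toy sketch in plain Python of Gaussian elimination over GF(*p*), where every arithmetic operation is exact; it is an illustration of the idea, not one of the optimized algorithms alluded to above.

```python
# Toy Gaussian elimination over the finite field GF(p) (plain Python;
# an illustration of the idea, not an optimized algorithm).
p = 10007   # a prime modulus

def rref_mod_p(rows, p):
    """Return the reduced row echelon form of `rows` over GF(p)."""
    rows = [[x % p for x in row] for row in rows]
    r = 0
    for c in range(len(rows[0])):
        pivot = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if pivot is None:
            continue                       # no pivot in this column
        rows[r], rows[pivot] = rows[pivot], rows[r]
        inv = pow(rows[r][c], -1, p)       # modular inverse (Python >= 3.8)
        rows[r] = [x * inv % p for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                f = rows[i][c]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[r])]
        r += 1
    return rows

print(rref_mod_p([[2, 3, 5], [-4, 2, 3]], p))
# [[1, 0, 1/16 mod p], [0, 1, 13/8 mod p]]; the kernel is then read off
# from the free columns exactly as in the rational case.
```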

### Floating point computation

For matrices whose entries are floating-point numbers, the problem of computing the kernel makes sense only for matrices such that the number of rows is equal to their rank: because of rounding errors, a floating-point matrix almost always has full rank, even when it is an approximation of a matrix of much smaller rank. Even for a full-rank matrix, it is possible to compute its kernel only if it is well conditioned, i.e. it has a low condition number.^{[6]}

Even for a well-conditioned full-rank matrix, Gaussian elimination does not behave correctly: it introduces rounding errors that are too large to obtain a meaningful result. As the computation of the kernel of a matrix is a special instance of solving a homogeneous system of linear equations, the kernel may be computed with any of the various algorithms designed to solve homogeneous systems. State-of-the-art software for this purpose is the LAPACK library.
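In practice, the standard floating-point approach is the singular value decomposition, which LAPACK supplies (NumPy's `svd` wraps it, and `scipy.linalg.null_space` packages exactly this computation). A minimal sketch, assuming NumPy:

```python
# Kernel of a floating-point matrix via the SVD (NumPy assumed); the
# right singular vectors for (numerically) zero singular values span ker(A).
import numpy as np

def null_space(A, rtol=1e-10):
    u, s, vt = np.linalg.svd(A)
    tol = rtol * (s[0] if s.size else 0.0)   # threshold relative to ||A||
    rank = int(np.sum(s > tol))              # numerical rank
    return vt[rank:].T                       # orthonormal basis of ker(A)

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
K = null_space(A)              # shape (3, 1): nullity 1
assert np.allclose(A @ K, 0.0)
```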

**^**"The Definitive Glossary of Higher Mathematical Jargon -- Null".*Math Vault*. 2019-08-01. Retrieved .**^**Weisstein, Eric W. "Kernel".*mathworld.wolfram.com*. Retrieved .- ^
^{a}^{b}"Kernel (Nullspace) | Brilliant Math & Science Wiki".*brilliant.org*. Retrieved . **^**Linear algebra, as discussed in this article, is a very well established mathematical discipline for which there are many sources. Almost all of the material in this article can be found in Lay 2005, Meyer 2001, and Strang's lecture.- ^
^{a}^{b}Weisstein, Eric W. "Rank-Nullity Theorem".*mathworld.wolfram.com*. Retrieved . **^**"Archived copy" (PDF). Archived from the original (PDF) on 2017-08-29. Retrieved .CS1 maint: archived copy as title (link)

## Bibliography

- Axler, Sheldon Jay (1997), *Linear Algebra Done Right* (2nd ed.), Springer-Verlag, ISBN 0-387-98259-0.
- Lay, David C. (2005), *Linear Algebra and Its Applications* (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7.
- Meyer, Carl D. (2001), *Matrix Analysis and Applied Linear Algebra*, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8, archived from the original on 2009-10-31.
- Poole, David (2006), *Linear Algebra: A Modern Introduction* (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3.
- Anton, Howard (2005), *Elementary Linear Algebra (Applications Version)* (9th ed.), Wiley International.
- Leon, Steven J. (2006), *Linear Algebra With Applications* (7th ed.), Pearson Prentice Hall.
- Lang, Serge (1987), *Linear Algebra*, Springer, ISBN 9780387964126.
- Trefethen, Lloyd N.; Bau, David III (1997), *Numerical Linear Algebra*, SIAM, ISBN 978-0-89871-361-9.

This article uses material from the Wikipedia page available here. It is released under the Creative Commons Attribution-Share-Alike License 3.0.
