Kronecker Product

In mathematics, the Kronecker product, sometimes denoted by ⊗, is an operation on two matrices of arbitrary size resulting in a block matrix. It is a generalization of the outer product (which is denoted by the same symbol) from vectors to matrices, and gives the matrix of the tensor product with respect to a standard choice of basis. The Kronecker product should not be confused with the usual matrix multiplication, which is an entirely different operation.

The Kronecker product is named after Leopold Kronecker, even though there is little evidence that he was the first to define and use it. The Kronecker product has also been called the Zehfuss matrix, after Johann Georg Zehfuss who in 1858 described this matrix operation, but Kronecker product is currently the most widely used.[1]

## Definition

If A is an m × n matrix and B is a p × q matrix, then the Kronecker product A ⊗ B is the pm × qn block matrix:

${\displaystyle \mathbf {A} \otimes \mathbf {B} ={\begin{bmatrix}a_{11}\mathbf {B} &\cdots &a_{1n}\mathbf {B} \\\vdots &\ddots &\vdots \\a_{m1}\mathbf {B} &\cdots &a_{mn}\mathbf {B} \end{bmatrix}},}$

more explicitly:

${\displaystyle {\mathbf {A} \otimes \mathbf {B} }={\begin{bmatrix}a_{11}b_{11}&a_{11}b_{12}&\cdots &a_{11}b_{1q}&\cdots &\cdots &a_{1n}b_{11}&a_{1n}b_{12}&\cdots &a_{1n}b_{1q}\\a_{11}b_{21}&a_{11}b_{22}&\cdots &a_{11}b_{2q}&\cdots &\cdots &a_{1n}b_{21}&a_{1n}b_{22}&\cdots &a_{1n}b_{2q}\\\vdots &\vdots &\ddots &\vdots &&&\vdots &\vdots &\ddots &\vdots \\a_{11}b_{p1}&a_{11}b_{p2}&\cdots &a_{11}b_{pq}&\cdots &\cdots &a_{1n}b_{p1}&a_{1n}b_{p2}&\cdots &a_{1n}b_{pq}\\\vdots &\vdots &&\vdots &\ddots &&\vdots &\vdots &&\vdots \\\vdots &\vdots &&\vdots &&\ddots &\vdots &\vdots &&\vdots \\a_{m1}b_{11}&a_{m1}b_{12}&\cdots &a_{m1}b_{1q}&\cdots &\cdots &a_{mn}b_{11}&a_{mn}b_{12}&\cdots &a_{mn}b_{1q}\\a_{m1}b_{21}&a_{m1}b_{22}&\cdots &a_{m1}b_{2q}&\cdots &\cdots &a_{mn}b_{21}&a_{mn}b_{22}&\cdots &a_{mn}b_{2q}\\\vdots &\vdots &\ddots &\vdots &&&\vdots &\vdots &\ddots &\vdots \\a_{m1}b_{p1}&a_{m1}b_{p2}&\cdots &a_{m1}b_{pq}&\cdots &\cdots &a_{mn}b_{p1}&a_{mn}b_{p2}&\cdots &a_{mn}b_{pq}\end{bmatrix}}.}$

More compactly, we have ${\displaystyle (A\otimes B)_{p(r-1)+v,q(s-1)+w}=a_{rs}b_{vw}}$

Similarly ${\displaystyle (A\otimes B)_{i,j}=a_{\lfloor (i-1)/p\rfloor +1,\lfloor (j-1)/q\rfloor +1}b_{i-\lfloor (i-1)/p\rfloor p,j-\lfloor (j-1)/q\rfloor q}.}$ Using the identity ${\displaystyle i\%p=i-\lfloor i/p\rfloor p}$, where ${\displaystyle i\%p}$ denotes the remainder of ${\displaystyle i/p}$, this may be written in a more symmetric form

${\displaystyle (A\otimes B)_{i,j}=a_{\lfloor (i-1)/p\rfloor +1,\lfloor (j-1)/q\rfloor +1}b_{(i-1)\%p+1,(j-1)\%q+1}.}$

If A and B represent linear transformations ${\displaystyle V_{1}\to W_{1}}$ and ${\displaystyle V_{2}\to W_{2}}$, respectively, then ${\displaystyle \mathbf {A} \otimes \mathbf {B} }$ represents the tensor product of the two maps, ${\displaystyle V_{1}\otimes V_{2}\to W_{1}\otimes W_{2}}$.

### Examples

${\displaystyle {\begin{bmatrix}1&2\\3&4\\\end{bmatrix}}\otimes {\begin{bmatrix}0&5\\6&7\\\end{bmatrix}}={\begin{bmatrix}1{\begin{bmatrix}0&5\\6&7\\\end{bmatrix}}&2{\begin{bmatrix}0&5\\6&7\\\end{bmatrix}}\\3{\begin{bmatrix}0&5\\6&7\\\end{bmatrix}}&4{\begin{bmatrix}0&5\\6&7\\\end{bmatrix}}\\\end{bmatrix}}={\begin{bmatrix}1\times 0&1\times 5&2\times 0&2\times 5\\1\times 6&1\times 7&2\times 6&2\times 7\\3\times 0&3\times 5&4\times 0&4\times 5\\3\times 6&3\times 7&4\times 6&4\times 7\\\end{bmatrix}}={\begin{bmatrix}0&5&0&10\\6&7&12&14\\0&15&0&20\\18&21&24&28\end{bmatrix}}.}$

Similarly:

${\displaystyle {\begin{bmatrix}1&-4&7\\-2&3&3\end{bmatrix}}\otimes {\begin{bmatrix}8&-9&-6&5\\1&-3&-4&7\\2&8&-8&-3\\1&2&-5&-1\end{bmatrix}}={\begin{bmatrix}8&-9&-6&5&-32&36&24&-20&56&-63&-42&35\\1&-3&-4&7&-4&12&16&-28&7&-21&-28&49\\2&8&-8&-3&-8&-32&32&12&14&56&-56&-21\\1&2&-5&-1&-4&-8&20&4&7&14&-35&-7\\-16&18&12&-10&24&-27&-18&15&24&-27&-18&15\\-2&6&8&-14&3&-9&-12&21&3&-9&-12&21\\-4&-16&16&6&6&24&-24&-9&6&24&-24&-9\\-2&-4&10&2&3&6&-15&-3&3&6&-15&-3\end{bmatrix}}}$
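In NumPy, `numpy.kron` computes the Kronecker product directly; a quick check of the first example above:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])

# block matrix [[1*B, 2*B], [3*B, 4*B]]
print(np.kron(A, B))
# [[ 0  5  0 10]
#  [ 6  7 12 14]
#  [ 0 15  0 20]
#  [18 21 24 28]]
```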

## Properties

### Relations to other matrix operations

1. Bilinearity and associativity:

The Kronecker product is a special case of the tensor product, so it is bilinear and associative:

{\displaystyle {\begin{aligned}\mathbf {A} \otimes (\mathbf {B} +\mathbf {C} )&=\mathbf {A} \otimes \mathbf {B} +\mathbf {A} \otimes \mathbf {C} ,\\(\mathbf {B} +\mathbf {C} )\otimes \mathbf {A} &=\mathbf {B} \otimes \mathbf {A} +\mathbf {C} \otimes \mathbf {A} ,\\(k\mathbf {A} )\otimes \mathbf {B} &=\mathbf {A} \otimes (k\mathbf {B} )=k(\mathbf {A} \otimes \mathbf {B} ),\\(\mathbf {A} \otimes \mathbf {B} )\otimes \mathbf {C} &=\mathbf {A} \otimes (\mathbf {B} \otimes \mathbf {C} ),\\\mathbf {A} \otimes \mathbf {0} &=\mathbf {0} \otimes \mathbf {A} =\mathbf {0} ,\end{aligned}}}
where A, B and C are matrices, 0 is a zero matrix, and k is a scalar.
2. Non-commutative:

In general, A ⊗ B and B ⊗ A are different matrices. However, A ⊗ B and B ⊗ A are permutation equivalent, meaning that there exist permutation matrices P and Q such that[2]

${\displaystyle \mathbf {B} \otimes \mathbf {A} =\mathbf {P} \,(\mathbf {A} \otimes \mathbf {B} )\,\mathbf {Q} .}$

If A and B are square matrices, then A ⊗ B and B ⊗ A are even permutation similar, meaning that we can take ${\displaystyle \mathbf {P} =\mathbf {Q} ^{\textsf {T}}}$.

The matrices P and Q are perfect shuffle matrices.[3] The perfect shuffle matrix Sp,q can be constructed by taking slices of the Ir identity matrix, where ${\displaystyle r=pq}$.

${\displaystyle \mathbf {S} _{p,q}={\begin{bmatrix}\mathbf {I} _{r}(1:q:r,:)\\\mathbf {I} _{r}(2:q:r,:)\\\vdots \\\mathbf {I} _{r}(q:q:r,:)\end{bmatrix}}}$

MATLAB colon notation is used here to indicate submatrices, and Ir is the identity matrix. If ${\displaystyle \mathbf {A} \in \mathbb {R} ^{m_{1}\times n_{1}}}$ and ${\displaystyle \mathbf {B} \in \mathbb {R} ^{m_{2}\times n_{2}}}$, then

${\displaystyle \mathbf {B} \otimes \mathbf {A} =\mathbf {S} _{m_{1},m_{2}}(\mathbf {A} \otimes \mathbf {B} )\mathbf {S} _{n_{1},n_{2}}^{\textsf {T}}}$
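The perfect shuffle construction can be sketched in NumPy; the helper below is a 0-based translation of the MATLAB slices above (the function name `perfect_shuffle` is illustrative, not a library API):

```python
import numpy as np

def perfect_shuffle(p, q):
    """S_{p,q}: stack the row slices I_r(1:q:r,:), ..., I_r(q:q:r,:) of I_r, r = pq."""
    r = p * q
    I = np.eye(r, dtype=int)
    # MATLAB's 1-based slice (k+1 : q : r) is I[k::q] in 0-based NumPy indexing
    return np.vstack([I[k::q] for k in range(q)])

A = np.array([[1, 2], [3, 4]])   # m1 x n1
B = np.array([[0, 5], [6, 7]])   # m2 x n2
m1, n1 = A.shape
m2, n2 = B.shape

S_row = perfect_shuffle(m1, m2)
S_col = perfect_shuffle(n1, n2)
# B ⊗ A = S_{m1,m2} (A ⊗ B) S_{n1,n2}^T
assert np.array_equal(np.kron(B, A), S_row @ np.kron(A, B) @ S_col.T)
```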
3. The mixed-product property:

If A, B, C and D are matrices of such size that one can form the matrix products AC and BD, then

${\displaystyle (\mathbf {A} \otimes \mathbf {B} )(\mathbf {C} \otimes \mathbf {D} )=(\mathbf {AC} )\otimes (\mathbf {BD} ).}$

This is called the mixed-product property, because it mixes the ordinary matrix product and the Kronecker product.
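A numerical sanity check of the mixed-product property, with rectangular factors chosen so that AC and BD exist:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3)); C = rng.standard_normal((3, 4))   # AC is 2 x 4
B = rng.standard_normal((3, 2)); D = rng.standard_normal((2, 5))   # BD is 3 x 5

lhs = np.kron(A, B) @ np.kron(C, D)   # (A ⊗ B)(C ⊗ D)
rhs = np.kron(A @ C, B @ D)           # (AC) ⊗ (BD)
assert np.allclose(lhs, rhs)
```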

In particular, using the transpose property from below, this means that if

${\displaystyle \mathbf {A} =\mathbf {Q} \otimes \mathbf {U} }$
and Q and U are orthogonal (or unitary), then A is also orthogonal (resp., unitary).

4. Hadamard product (element-wise product):

The mixed-product property also works for the element-wise product. If A and C are matrices of the same size, and B and D are matrices of the same size, then

${\displaystyle (\mathbf {A} \otimes \mathbf {B} )\circ (\mathbf {C} \otimes \mathbf {D} )=(\mathbf {A} \circ \mathbf {C} )\otimes (\mathbf {B} \circ \mathbf {D} ).}$
5. The inverse of a Kronecker product:

It follows from the mixed-product property that A ⊗ B is invertible if and only if both A and B are invertible, in which case the inverse is given by

${\displaystyle (\mathbf {A} \otimes \mathbf {B} )^{-1}=\mathbf {A} ^{-1}\otimes \mathbf {B} ^{-1}.}$

This property holds for the Moore-Penrose pseudoinverse as well,[4] that is

${\displaystyle (\mathbf {A} \otimes \mathbf {B} )^{+}=\mathbf {A} ^{+}\otimes \mathbf {B} ^{+}.}$
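Both identities are easy to check numerically (random Gaussian matrices are almost surely invertible, so the inverse case is well-posed here):

```python
import numpy as np

rng = np.random.default_rng(2)

# inverse of a Kronecker product of square, invertible factors
A = rng.standard_normal((3, 3))
B = rng.standard_normal((2, 2))
assert np.allclose(np.linalg.inv(np.kron(A, B)),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))

# pseudoinverse version, with rectangular factors
M = rng.standard_normal((4, 2))
N = rng.standard_normal((3, 5))
assert np.allclose(np.linalg.pinv(np.kron(M, N)),
                   np.kron(np.linalg.pinv(M), np.linalg.pinv(N)))
```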

In the language of category theory, the mixed-product property of the Kronecker product (and the more general tensor product) shows that the category MatF of matrices over a field F is in fact a monoidal category, with objects the natural numbers n, morphisms the n-by-m matrices with entries in F, composition given by matrix multiplication, identity arrows simply the identity matrices In, and the tensor product given by the Kronecker product.[5]

MatF is a concrete skeleton category for the equivalent category FinVectF of finite-dimensional vector spaces over F, whose objects are such finite-dimensional vector spaces V, arrows are F-linear maps, and identity arrows are the identity maps of the spaces. The equivalence of categories amounts to simultaneously choosing a basis in every finite-dimensional vector space V over F; matrix elements then represent these maps with respect to the chosen bases, and likewise the Kronecker product is the representation of the tensor product in the chosen bases.
6. Transpose:

Transposition and conjugate transposition are distributive over the Kronecker product:

${\displaystyle (\mathbf {A} \otimes \mathbf {B} )^{\textsf {T}}=\mathbf {A} ^{\textsf {T}}\otimes \mathbf {B} ^{\textsf {T}}}$ and ${\displaystyle (\mathbf {A} \otimes \mathbf {B} )^{*}=\mathbf {A} ^{*}\otimes \mathbf {B} ^{*}.}$
7. Determinant:

Let A be an n × n matrix and let B be an m × m matrix. Then

${\displaystyle \left|\mathbf {A} \otimes \mathbf {B} \right|=\left|\mathbf {A} \right|^{m}\left|\mathbf {B} \right|^{n}.}$
The exponent of |A| is the order of B and the exponent of |B| is the order of A.
8. Kronecker sum and exponentiation:

If A is n × n, B is m × m and Ik denotes the k × k identity matrix then we can define what is sometimes called the Kronecker sum, ⊕, by

${\displaystyle \mathbf {A} \oplus \mathbf {B} =\mathbf {A} \otimes \mathbf {I} _{m}+\mathbf {I} _{n}\otimes \mathbf {B} .}$

This is different from the direct sum of two matrices. This operation is related to the tensor product on Lie algebras.

We have the following formula for the matrix exponential, which is useful in some numerical evaluations.[6]

${\displaystyle \exp({\mathbf {N} \oplus \mathbf {M} })=\exp(\mathbf {N} )\otimes \exp(\mathbf {M} )}$
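This exponential formula can be verified numerically. The sketch below uses a naive truncated Taylor series for the matrix exponential, an assumption made only to keep the example self-contained (in practice one would use `scipy.linalg.expm`); the helper names `kron_sum` and `expm_taylor` are illustrative:

```python
import numpy as np

def kron_sum(A, B):
    """Kronecker sum A ⊕ B = A ⊗ I_m + I_n ⊗ B for square A (n x n) and B (m x m)."""
    n, m = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)

def expm_taylor(X, terms=40):
    """Truncated Taylor series for exp(X); adequate for small matrices of modest norm."""
    out = np.eye(X.shape[0])
    term = np.eye(X.shape[0])
    for k in range(1, terms):
        term = term @ X / k   # accumulates X^k / k!
        out = out + term
    return out

rng = np.random.default_rng(7)
N = 0.5 * rng.standard_normal((2, 2))
M = 0.5 * rng.standard_normal((3, 3))

# exp(N ⊕ M) = exp(N) ⊗ exp(M)
assert np.allclose(expm_taylor(kron_sum(N, M)),
                   np.kron(expm_taylor(N), expm_taylor(M)), atol=1e-8)
```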

Kronecker sums appear naturally in physics when considering ensembles of non-interacting systems. Let Hi be the Hamiltonian of the i-th such system. Then the total Hamiltonian of the ensemble is

${\displaystyle H_{\mathrm {Tot} }=\bigoplus _{i}H_{i}}$.

### Abstract properties

1. Spectrum:

Suppose that A and B are square matrices of size n and m respectively. Let λ1, ..., λn be the eigenvalues of A and μ1, ..., μm be those of B (listed according to multiplicity). Then the eigenvalues of A ⊗ B are

${\displaystyle \lambda _{i}\mu _{j},\qquad i=1,\ldots ,n,\,j=1,\ldots ,m.}$

It follows that the trace and determinant of a Kronecker product are given by

${\displaystyle \operatorname {tr} (\mathbf {A} \otimes \mathbf {B} )=\operatorname {tr} \mathbf {A} \,\operatorname {tr} \mathbf {B} \quad {\text{and}}\quad \det(\mathbf {A} \otimes \mathbf {B} )=(\det \mathbf {A} )^{m}(\det \mathbf {B} )^{n}.}$
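A numerical check of the spectrum, trace, and determinant relations (eigenvalues are matched by nearest distance rather than sorted, to avoid ordering ambiguities among complex values):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))   # n = 3
B = rng.standard_normal((2, 2))   # m = 2

eigA = np.linalg.eigvals(A)
eigB = np.linalg.eigvals(B)
products = np.array([lam * mu for lam in eigA for mu in eigB])

# every eigenvalue of A ⊗ B is some product λ_i μ_j
for z in np.linalg.eigvals(np.kron(A, B)):
    assert np.min(np.abs(products - z)) < 1e-8

assert np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B))
# det(A ⊗ B) = (det A)^m (det B)^n with n = 3, m = 2
assert np.isclose(np.linalg.det(np.kron(A, B)),
                  np.linalg.det(A)**2 * np.linalg.det(B)**3)
```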
2. Singular values:

If A and B are rectangular matrices, then one can consider their singular values. Suppose that A has rA nonzero singular values, namely

${\displaystyle \sigma _{\mathbf {A} ,i},\qquad i=1,\ldots ,r_{\mathbf {A} }.}$

Similarly, denote the nonzero singular values of B by

${\displaystyle \sigma _{\mathbf {B} ,i},\qquad i=1,\ldots ,r_{\mathbf {B} }.}$

Then the Kronecker product A ⊗ B has rArB nonzero singular values, namely

${\displaystyle \sigma _{\mathbf {A} ,i}\sigma _{\mathbf {B} ,j},\qquad i=1,\ldots ,r_{\mathbf {A} },\,j=1,\ldots ,r_{\mathbf {B} }.}$

Since the rank of a matrix equals the number of nonzero singular values, we find that

${\displaystyle \operatorname {rank} (\mathbf {A} \otimes \mathbf {B} )=\operatorname {rank} \mathbf {A} \,\operatorname {rank} \mathbf {B} .}$
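Both the singular-value and rank relations can be checked with rectangular factors (here A is 4 × 3 with rA = 3 and B is 2 × 5 with rB = 2, so A ⊗ B has 6 nonzero singular values out of 8):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((2, 5))

sA = np.linalg.svd(A, compute_uv=False)   # r_A = 3 nonzero singular values
sB = np.linalg.svd(B, compute_uv=False)   # r_B = 2 nonzero singular values
sK = np.sort(np.linalg.svd(np.kron(A, B), compute_uv=False))[::-1]

expected = np.sort(np.outer(sA, sB).ravel())[::-1]   # all products σ_{A,i} σ_{B,j}
k = expected.size                                     # r_A * r_B = 6
assert np.allclose(sK[:k], expected)                  # top singular values match
assert np.allclose(sK[k:], 0, atol=1e-8)              # the rest vanish

assert np.linalg.matrix_rank(np.kron(A, B)) == \
       np.linalg.matrix_rank(A) * np.linalg.matrix_rank(B)
```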
3. Relation to the abstract tensor product:

The Kronecker product of matrices corresponds to the abstract tensor product of linear maps. Specifically, if the vector spaces V, W, X, and Y have bases {v1, ..., vm}, {w1, ..., wn}, {x1, ..., xd}, and {y1, ..., ye}, respectively, and if the matrices A and B represent the linear transformations S : V → X and T : W → Y, respectively, in the appropriate bases, then the matrix A ⊗ B represents the tensor product of the two maps, S ⊗ T : V ⊗ W → X ⊗ Y, with respect to the basis {v1 ⊗ w1, v1 ⊗ w2, ..., vm ⊗ wn} of V ⊗ W and the similarly defined basis of X ⊗ Y, with the property that (A ⊗ B)(vi ⊗ wj) = (Avi) ⊗ (Bwj), where i and j are integers in the proper range.[7]

When V and W are Lie algebras, and S : V → V and T : W → W are Lie algebra homomorphisms, the Kronecker sum of A and B represents the induced Lie algebra homomorphism V ⊗ W → V ⊗ W.
4. Relation to products of graphs:
The Kronecker product of the adjacency matrices of two graphs is the adjacency matrix of the tensor product graph. The Kronecker sum of the adjacency matrices of two graphs is the adjacency matrix of the Cartesian product graph.[8]

## Matrix equations

The Kronecker product can be used to get a convenient representation for some matrix equations. Consider for instance the equation AXB = C, where A, B and C are given matrices and the matrix X is the unknown. We can use the "vec trick" to rewrite this equation as

${\displaystyle \left(\mathbf {B} ^{\textsf {T}}\otimes \mathbf {A} \right)\,\operatorname {vec} (\mathbf {X} )=\operatorname {vec} (\mathbf {AXB} )=\operatorname {vec} (\mathbf {C} ).}$

Here, vec(X) denotes the vectorization of the matrix X formed by stacking the columns of X into a single column vector.

It now follows from the properties of the Kronecker product that the equation AXB = C has a unique solution if and only if A and B are nonsingular (Horn & Johnson 1991, Lemma 4.3.1).
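The vec trick is straightforward to verify and use in NumPy; vectorization here is column stacking, i.e. `flatten(order='F')`:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
X = rng.standard_normal((3, 4))
C = A @ X @ B

vec = lambda M: M.flatten(order='F')   # stack the columns of M

# vec(AXB) = (B^T ⊗ A) vec(X)
assert np.allclose(np.kron(B.T, A) @ vec(X), vec(C))

# recover the unknown X by solving the Kronecker-structured linear system
x = np.linalg.solve(np.kron(B.T, A), vec(C))
assert np.allclose(x.reshape(3, 4, order='F'), X)
```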

If X is row-ordered into the column vector x, then AXB can also be written as ${\displaystyle \left(\mathbf {A} \otimes \mathbf {B} ^{\textsf {T}}\right)\mathbf {x} }$ (Jain 1989, 2.8 Block Matrices and Kronecker Products).

### Applications

For an example of the application of this formula, see the article on the Lyapunov equation. This formula also comes in handy in showing that the matrix normal distribution is a special case of the multivariate normal distribution. This formula is also useful for representing 2D image processing operations in matrix-vector form.

Another example is when a matrix can be factored as a Kronecker product, in which case matrix multiplication can be performed faster by using the above formula. This can be applied recursively, as done in the radix-2 FFT and the Fast Walsh-Hadamard transform. Splitting a known matrix into the Kronecker product of two smaller matrices is known as the "nearest Kronecker product" problem, and can be solved exactly[9] by using the SVD. To split a matrix into the Kronecker product of more than two matrices, in an optimal fashion, is a difficult problem and the subject of ongoing research; some authors cast it as a tensor decomposition problem.[10][11]

In conjunction with the least squares method, the Kronecker product can be used as an accurate solution to the hand-eye calibration problem.[12]

## Related matrix operations

Two related matrix operations are the Tracy-Singh and Khatri-Rao products, which operate on partitioned matrices. Let the m × n matrix A be partitioned into the mi × nj blocks Aij and the p × q matrix B into the pk × qℓ blocks Bkl, with of course Σi mi = m, Σj nj = n, Σk pk = p and Σℓ qℓ = q.

### Tracy-Singh product

The Tracy-Singh product is defined as[13][14]

${\displaystyle \mathbf {A} \circ \mathbf {B} =\left(\mathbf {A} _{ij}\circ \mathbf {B} \right)_{ij}=\left(\left(\mathbf {A} _{ij}\otimes \mathbf {B} _{kl}\right)_{kl}\right)_{ij}}$

which means that the (ij)-th subblock of the mp × nq product A ∘ B is the mi p × nj q matrix Aij ∘ B, of which the (kℓ)-th subblock equals the mi pk × nj qℓ matrix Aij ⊗ Bkℓ. Essentially the Tracy-Singh product is the pairwise Kronecker product for each pair of partitions in the two matrices.

For example, if A and B are both partitioned matrices:

${\displaystyle \mathbf {A} =\left[{\begin{array}{c | c}\mathbf {A} _{11}&\mathbf {A} _{12}\\\hline \mathbf {A} _{21}&\mathbf {A} _{22}\end{array}}\right]=\left[{\begin{array}{c c | c}1&2&3\\4&5&6\\\hline 7&8&9\end{array}}\right],\quad \mathbf {B} =\left[{\begin{array}{c | c}\mathbf {B} _{11}&\mathbf {B} _{12}\\\hline \mathbf {B} _{21}&\mathbf {B} _{22}\end{array}}\right]=\left[{\begin{array}{c | c c}1&4&7\\\hline 2&5&8\\3&6&9\end{array}}\right],}$

we get:

{\displaystyle {\begin{aligned}\mathbf {A} \circ \mathbf {B} =\left[{\begin{array}{c | c}\mathbf {A} _{11}\circ \mathbf {B} &\mathbf {A} _{12}\circ \mathbf {B} \\\hline \mathbf {A} _{21}\circ \mathbf {B} &\mathbf {A} _{22}\circ \mathbf {B} \end{array}}\right]={}&\left[{\begin{array}{c | c | c | c}\mathbf {A} _{11}\otimes \mathbf {B} _{11}&\mathbf {A} _{11}\otimes \mathbf {B} _{12}&\mathbf {A} _{12}\otimes \mathbf {B} _{11}&\mathbf {A} _{12}\otimes \mathbf {B} _{12}\\\hline \mathbf {A} _{11}\otimes \mathbf {B} _{21}&\mathbf {A} _{11}\otimes \mathbf {B} _{22}&\mathbf {A} _{12}\otimes \mathbf {B} _{21}&\mathbf {A} _{12}\otimes \mathbf {B} _{22}\\\hline \mathbf {A} _{21}\otimes \mathbf {B} _{11}&\mathbf {A} _{21}\otimes \mathbf {B} _{12}&\mathbf {A} _{22}\otimes \mathbf {B} _{11}&\mathbf {A} _{22}\otimes \mathbf {B} _{12}\\\hline \mathbf {A} _{21}\otimes \mathbf {B} _{21}&\mathbf {A} _{21}\otimes \mathbf {B} _{22}&\mathbf {A} _{22}\otimes \mathbf {B} _{21}&\mathbf {A} _{22}\otimes \mathbf {B} _{22}\end{array}}\right]\\={}&\left[{\begin{array}{c c | c c c c | c | c c}1&2&4&7&8&14&3&12&21\\4&5&16&28&20&35&6&24&42\\\hline 2&4&5&8&10&16&6&15&24\\3&6&6&9&12&18&9&18&27\\8&10&20&32&25&40&12&30&48\\12&15&24&36&30&45&18&36&54\\\hline 7&8&28&49&32&56&9&36&63\\\hline 14&16&35&56&40&64&18&45&72\\21&24&42&63&48&72&27&54&81\end{array}}\right].\end{aligned}}}

### Khatri-Rao product

The Khatri-Rao product is defined as[15][16]

${\displaystyle \mathbf {A} \ast \mathbf {B} =\left(\mathbf {A} _{ij}\otimes \mathbf {B} _{ij}\right)_{ij}}$

in which the ij-th block is the mipi × njqj sized Kronecker product of the corresponding blocks of A and B, assuming the number of row and column partitions of both matrices is equal. The size of the product is then (Σi mipi) × (Σj njqj). Proceeding with the same matrices as the previous example, we obtain:

${\displaystyle \mathbf {A} \ast \mathbf {B} =\left[{\begin{array}{c | c}\mathbf {A} _{11}\otimes \mathbf {B} _{11}&\mathbf {A} _{12}\otimes \mathbf {B} _{12}\\\hline \mathbf {A} _{21}\otimes \mathbf {B} _{21}&\mathbf {A} _{22}\otimes \mathbf {B} _{22}\end{array}}\right]=\left[{\begin{array}{c c | c c}1&2&12&21\\4&5&24&42\\\hline 14&16&45&72\\21&24&54&81\end{array}}\right].}$

This is a submatrix of the Tracy-Singh product of the two matrices (each partition in this example is a partition in a corner of the Tracy-Singh product) and also may be called the block Kronecker product.

A column-wise Kronecker product of two matrices may also be called the Khatri-Rao product. This product assumes the partitions of the matrices are their columns. In this case m1 = m, p1 = p, n = q, and for each j: nj = qj = 1. The resulting product is an mp × n matrix of which each column is the Kronecker product of the corresponding columns of A and B. Using the matrices from the previous examples with the columns partitioned:

${\displaystyle \mathbf {C} =\left[{\begin{array}{c | c | c}\mathbf {C} _{1}&\mathbf {C} _{2}&\mathbf {C} _{3}\end{array}}\right]=\left[{\begin{array}{c | c | c}1&2&3\\4&5&6\\7&8&9\end{array}}\right],\quad \mathbf {D} =\left[{\begin{array}{c | c | c }\mathbf {D} _{1}&\mathbf {D} _{2}&\mathbf {D} _{3}\end{array}}\right]=\left[{\begin{array}{c | c | c }1&4&7\\2&5&8\\3&6&9\end{array}}\right],}$

so that:

${\displaystyle \mathbf {C} \ast \mathbf {D} =\left[{\begin{array}{c | c | c }\mathbf {C} _{1}\otimes \mathbf {D} _{1}&\mathbf {C} _{2}\otimes \mathbf {D} _{2}&\mathbf {C} _{3}\otimes \mathbf {D} _{3}\end{array}}\right]=\left[{\begin{array}{c | c | c }1&8&21\\2&10&24\\3&12&27\\4&20&42\\8&25&48\\12&30&54\\7&32&63\\14&40&72\\21&48&81\end{array}}\right].}$
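The column-wise Khatri-Rao product reduces to Kronecker products of matching columns, so a short helper reproduces the example above (the function name `khatri_rao` is illustrative; SciPy ships an equivalent `scipy.linalg.khatri_rao`):

```python
import numpy as np

def khatri_rao(C, D):
    """Column-wise Khatri-Rao product: Kronecker product of matching columns."""
    assert C.shape[1] == D.shape[1], "both matrices need the same number of columns"
    return np.column_stack([np.kron(C[:, j], D[:, j]) for j in range(C.shape[1])])

C = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
D = np.array([[1, 4, 7], [2, 5, 8], [3, 6, 9]])
print(khatri_rao(C, D))   # 9 x 3, matching the worked example
```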

This column-wise version of the Khatri-Rao product is useful in linear algebra approaches to data analytical processing[17] and in optimizing the solution of inverse problems dealing with a diagonal matrix.[18][19]

An alternative matrix product, which uses row-wise splitting of matrices with a given quantity of rows, was proposed by V. Slyusar in 1996.[20][21][22]

This matrix operation was named the "face-splitting product" of matrices or the "transposed Khatri-Rao product". This type of operation is based on row-by-row Kronecker products of two matrices. Using the matrices from the previous examples with the rows partitioned:

${\displaystyle \mathbf {C} =\left[{\begin{array}{c c}\mathbf {C} _{1}\\\hline \mathbf {C} _{2}\\\hline \mathbf {C} _{3}\\\end{array}}\right]=\left[{\begin{array}{c c c}1&2&3\\\hline 4&5&6\\\hline 7&8&9\end{array}}\right],\quad \mathbf {D} =\left[{\begin{array}{c }\mathbf {D} _{1}\\\hline \mathbf {D} _{2}\\\hline \mathbf {D} _{3}\\\end{array}}\right]=\left[{\begin{array}{c c c }1&4&7\\\hline 2&5&8\\\hline 3&6&9\end{array}}\right],}$

we get:

${\displaystyle \mathbf {C} \bullet \mathbf {D} =\left[{\begin{array}{c }\mathbf {C} _{1}\otimes \mathbf {D} _{1}\\\hline \mathbf {C} _{2}\otimes \mathbf {D} _{2}\\\hline \mathbf {C} _{3}\otimes \mathbf {D} _{3}\\\end{array}}\right]=\left[{\begin{array}{c c c c c c c c c }1&4&7&2&8&14&3&12&21\\\hline 8&20&32&10&25&40&12&30&48\\\hline 21&42&63&24&48&72&27&54&81\end{array}}\right].}$
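The face-splitting product is the row-wise mirror of the column-wise Khatri-Rao product; a sketch reproducing the example above (the helper name `face_splitting` is illustrative, not a library API):

```python
import numpy as np

def face_splitting(C, D):
    """Face-splitting (transposed Khatri-Rao) product: Kronecker product of matching rows."""
    assert C.shape[0] == D.shape[0], "both matrices need the same number of rows"
    return np.vstack([np.kron(C[i], D[i]) for i in range(C.shape[0])])

C = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
D = np.array([[1, 4, 7], [2, 5, 8], [3, 6, 9]])
print(face_splitting(C, D))   # 3 x 9, matching the worked example
```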

#### Main properties

1. Transpose:[20]
${\displaystyle \left(\mathbf {A} \bullet \mathbf {B} \right)^{\textsf {T}}={\textbf {A}}^{\textsf {T}}\ast \mathbf {B} ^{\textsf {T}}}$
2. The mixed-product property:[21][23]
${\displaystyle (\mathbf {A} \bullet \mathbf {B} )\left(\mathbf {A} ^{\textsf {T}}\ast \mathbf {B} ^{\textsf {T}}\right)=\left(\mathbf {A} \mathbf {A} ^{\textsf {T}}\right)\circ \left(\mathbf {B} \mathbf {B} ^{\textsf {T}}\right)}$,

where ${\displaystyle \circ }$ denotes the Hadamard product. More generally,

${\displaystyle (\mathbf {A} \bullet \mathbf {B} )(\mathbf {C} \ast \mathbf {D} )=(\mathbf {A} \mathbf {C} )\circ (\mathbf {B} \mathbf {D} )}$.
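This identity can be checked numerically with small helpers for both face products (the helper names are illustrative; shapes are chosen so that AC and BD exist):

```python
import numpy as np

def face_split(A, B):
    """Face-splitting product: row-wise Kronecker products."""
    return np.vstack([np.kron(A[i], B[i]) for i in range(A.shape[0])])

def khatri_rao(C, D):
    """Column-wise Khatri-Rao product: column-wise Kronecker products."""
    return np.column_stack([np.kron(C[:, j], D[:, j]) for j in range(C.shape[1])])

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 3)); B = rng.standard_normal((4, 2))
C = rng.standard_normal((3, 5)); D = rng.standard_normal((2, 5))

# (A • B)(C ∗ D) = (AC) ∘ (BD), with ∘ the Hadamard (element-wise) product
assert np.allclose(face_split(A, B) @ khatri_rao(C, D), (A @ C) * (B @ D))
```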

## Notes

1. ^ G. Zehfuss (1858), "Ueber eine gewisse Determinante", Zeitschrift für Mathematik und Physik, 3: 298-301.
2. ^ H. V. Henderson; S. R. Searle (1980). "The vec-permutation matrix, the vec operator and Kronecker products: A review" (PDF). Linear and Multilinear Algebra. 9 (4): 271-288. doi:10.1080/03081088108817379. hdl:1813/32747.
3. ^ Charles F. Van Loan (2000). "The ubiquitous Kronecker product". Journal of Computational and Applied Mathematics. 123 (1-2): 85-100. doi:10.1016/s0377-0427(00)00393-9.
4. ^ Langville, Amy N.; Stewart, William J. (June 1, 2004). "The Kronecker product and stochastic automata networks". Journal of Computational and Applied Mathematics. 167 (2): 429-447. doi:10.1016/j.cam.2003.10.010.
5. ^ MacEdo, Hugo Daniel; Oliveira, José Nuno (2013). "Typing linear algebra: A biproduct-oriented approach". Science of Computer Programming. 78 (11): 2160-2191. CiteSeerX 10.1.1.747.2083. doi:10.1016/j.scico.2012.07.012.
6. ^ J. W. Brewer (1969). "A Note on Kronecker Matrix Products and Matrix Equation Systems". SIAM Journal on Applied Mathematics. 17 (3): 603-606. doi:10.1137/0117057.
7. ^ Dummit, David S.; Foote, Richard M. (1999). Abstract Algebra (2 ed.). New York: John Wiley and Sons. pp. 401-402. ISBN 978-0-471-36857-1.
8. ^ See answer to Exercise 96, D. E. Knuth: "Pre-Fascicle 0a: Introduction to Combinatorial Algorithms", zeroth printing (revision 2), to appear as part of D.E. Knuth: The Art of Computer Programming Vol. 4A
9. ^ Van Loan, C; Pitsianis, N (1992). Approximation with Kronecker Products. Ithaca, NY: Cornell University.
10. ^ Kronecker product approximation with multiple factor matrices via the tensor product algorithm, Wu et al., 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
11. ^ Learning fast dictionaries for sparse representations using low-rank tensor decompositions, Dantas et al., LVA/ICA 2018 - 14th International Conference on Latent Variable Analysis and Signal Separation, Jul 2018, Guildford, United Kingdom. pp.456-466, ff10.1007/978-3-319-93764-9_42
12. ^ Algo Li, et al. "Simultaneous robot-world and hand-eye calibration using dual-quaternions and Kronecker product." International Journal of the Physical Sciences Vol. 5(10), pp. 1530-1536, 4 September 2010.
13. ^ Tracy, D. S.; Singh, R. P. (1972). "A New Matrix Product and Its Applications in Matrix Differentiation". Statistica Neerlandica. 26 (4): 143-157. doi:10.1111/j.1467-9574.1972.tb00199.x.
14. ^ Liu, S. (1999). "Matrix Results on the Khatri-Rao and Tracy-Singh Products". Linear Algebra and Its Applications. 289 (1-3): 267-277. doi:10.1016/S0024-3795(98)10209-4.
15. ^ Khatri C. G., C. R. Rao (1968). "Solutions to some functional equations and their applications to characterization of probability distributions". Sankhya. 30: 167-180.
16. ^ Zhang X; Yang Z; Cao C. (2002), "Inequalities involving Khatri-Rao products of positive semi-definite matrices", Applied Mathematics E-notes, 2: 117-124
17. ^ See e.g. H.D. Macedo and J.N. Oliveira. A linear algebra approach to OLAP. Formal Aspects of Computing, 27(2):283-307, 2015.
18. ^ Lev-Ari, Hanoch (2005-01-01). "Efficient Solution of Linear Matrix Equations with Application to Multistatic Antenna Array Processing". Communications in Information & Systems. 05 (1): 123-130. doi:10.4310/CIS.2005.v5.n1.a5. ISSN 1526-7555.
19. ^ Masiero, B.; Nascimento, V. H. (2017-05-01). "Revisiting the Kronecker Array Transform". IEEE Signal Processing Letters. 24 (5): 525-529. doi:10.1109/LSP.2017.2674969. ISSN 1070-9908.
20. ^ a b Slyusar, V. I. (1997-05-20). "Analytical model of the digital antenna array on a basis of face-splitting matrix products" (PDF). Proc. ICATT- 97, Kyiv: 108-109.
21. ^ a b Slyusar, V. I. (1999). "A Family of Face Products of Matrices and its Properties" (PDF). Cybernetics and Systems Analysis C/C of Kibernetika I Sistemnyi Analiz. 35 (3): 379-384. doi:10.1007/BF02733426.
22. ^ Slyusar, V. I. (2003). "Generalized face-products of matrices in models of digital antenna arrays with nonidentical channels" (PDF). Radioelectronics and Communications Systems. 46 (10): 9-17.
23. ^ Slyusar, V. I. (1997-09-15). "New operations of matrices product for applications of radars" (PDF). Proc. Direct and Inverse Problems of Electromagnetic and Acoustic Wave Theory (DIPED-97), Lviv.: 73-74.