# Indicator Function
Figure: a three-dimensional plot of an indicator function, shown over a square two-dimensional domain (the set X); the "raised" portion overlays those two-dimensional points which are members of the "indicated" subset A.

In mathematics, an indicator function or a characteristic function of a subset A of a set X is a function from X to the two-element set $\{0,1\}$ , typically denoted $\mathbf {1} _{A}\colon X\to \{0,1\}$ , that indicates whether an element of X belongs to A: $\mathbf {1} _{A}(x)=1$ if $x$ belongs to A, and $\mathbf {1} _{A}(x)=0$ if $x$ does not. It is also denoted by $I_{A}$ to emphasize that this function identifies the subset A of X.

In other contexts, such as computer science, this would more often be described as a boolean predicate function (to test set inclusion).

The Dirichlet function is an example of an indicator function and is the indicator of the rationals.

## Definition

The indicator function of a subset A of a set X is a function

$\mathbf {1} _{A}\colon X\to \{0,1\}$ defined as

$\mathbf {1} _{A}(x):={\begin{cases}1~&{\text{ if }}~x\in A~,\\0~&{\text{ if }}~x\notin A~.\end{cases}}$

The Iverson bracket provides the equivalent notation $[x\in A]$ , which may be used instead of $\mathbf {1} _{A}(x)$ .

The function $\mathbf {1} _{A}$ is sometimes denoted $I_{A}$ , $\chi _{A}$ , $K_{A}$ , or even just $A$ .[a][b]
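The definition above translates directly into code. The following is a minimal sketch in Python; the names `indicator`, `A`, and `one_A` are illustrative, not from any library.

```python
def indicator(A):
    """Return the indicator function 1_A as a Python callable."""
    return lambda x: 1 if x in A else 0

A = {2, 3, 5, 7}        # the "indicated" subset
one_A = indicator(A)

print(one_A(3))         # 1: 3 belongs to A
print(one_A(4))         # 0: 4 does not belong to A
```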

## Notation and terminology

The notation $\chi _{A}$ is also used to denote the characteristic function in convex analysis, which is a different object: that function takes the value 0 for elements of A and $+\infty$ for elements outside A, as if obtained from an extended-arithmetic reciprocal of the standard indicator function.

A related concept in statistics is that of a dummy variable. (This must not be confused with "dummy variables" as that term is usually used in mathematics, also called a bound variable.)

The term "characteristic function" has an unrelated meaning in classic probability theory. For this reason, traditional probabilists use the term indicator function for the function defined here almost exclusively, while mathematicians in other fields are more likely to use the term characteristic function[a] to describe the function that indicates membership in a set.

In fuzzy logic and modern many-valued logic, predicates are modeled by generalized characteristic functions. That is, the strict true/false valuation of the predicate is replaced by a quantity interpreted as the degree of truth.

## Basic properties

The indicator or characteristic function of a subset A of some set X maps elements of X to the range {0, 1}.

This mapping is surjective only when A is a non-empty proper subset of X. If $A=X$ , then $\mathbf {1} _{A}=1$ identically. By a similar argument, if $A=\emptyset$ , then $\mathbf {1} _{A}=0$ identically.

In the following, the dot represents multiplication ( $1\cdot 1=1$ , $1\cdot 0=0$ , etc.), "+" and "−" represent addition and subtraction, and "$\cap$ " and "$\cup$ " denote intersection and union, respectively.

If $A$ and $B$ are two subsets of $X$ , then

$\mathbf {1} _{A\cap B}=\min\{\mathbf {1} _{A},\mathbf {1} _{B}\}=\mathbf {1} _{A}\cdot \mathbf {1} _{B},$

$\mathbf {1} _{A\cup B}=\max\{{\mathbf {1} _{A},\mathbf {1} _{B}}\}=\mathbf {1} _{A}+\mathbf {1} _{B}-\mathbf {1} _{A}\cdot \mathbf {1} _{B},$

and the indicator function of the complement of $A$ , i.e. $A^{\complement }$ , is:

$\mathbf {1} _{A^{\complement }}=1-\mathbf {1} _{A}.$
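These pointwise identities can be checked exhaustively on a small finite universe. The sets `X`, `A`, and `B` below are illustrative choices, not canonical examples.

```python
X = set(range(10))
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

def ind(S):
    """Indicator function of the subset S."""
    return lambda x: 1 if x in S else 0

iA, iB = ind(A), ind(B)
i_inter, i_union, i_comp = ind(A & B), ind(A | B), ind(X - A)

for x in X:
    # 1_{A∩B} = min(1_A, 1_B) = 1_A * 1_B
    assert i_inter(x) == min(iA(x), iB(x)) == iA(x) * iB(x)
    # 1_{A∪B} = max(1_A, 1_B) = 1_A + 1_B - 1_A * 1_B
    assert i_union(x) == max(iA(x), iB(x)) == iA(x) + iB(x) - iA(x) * iB(x)
    # 1_{A^c} = 1 - 1_A
    assert i_comp(x) == 1 - iA(x)
print("all identities hold on X")
```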

More generally, suppose $A_{1},\dotsc ,A_{n}$ is a collection of subsets of $X$ . For any $x\in X$ :

$\prod _{k=1}^{n}(1-\mathbf {1} _{A_{k}}(x))$

is clearly a product of 0s and 1s. This product has the value 1 at precisely those $x\in X$ that belong to none of the sets $A_{k}$ , and is 0 otherwise. That is

$\prod _{k=1}^{n}(1-\mathbf {1} _{A_{k}})=\mathbf {1} _{X\setminus \bigcup _{k}A_{k}}=1-\mathbf {1} _{\bigcup _{k}A_{k}}.$

Expanding the product on the left-hand side,

$\mathbf {1} _{\bigcup _{k}A_{k}}=1-\sum _{F\subseteq \{1,2,\dotsc ,n\}}(-1)^{|F|}\mathbf {1} _{\bigcap _{k\in F}A_{k}}=\sum _{\emptyset \neq F\subseteq \{1,2,\dotsc ,n\}}(-1)^{|F|+1}\mathbf {1} _{\bigcap _{k\in F}A_{k}}$

where $|F|$ is the cardinality of $F$ . This is one form of the principle of inclusion-exclusion.
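The inclusion-exclusion identity can be verified pointwise on a finite universe; `itertools.combinations` enumerates the nonempty index sets $F$. The particular sets below are illustrative.

```python
from itertools import combinations
from functools import reduce

X = set(range(12))
sets = [{0, 1, 2, 3}, {2, 3, 4, 5}, {5, 6, 7}]   # A_1, ..., A_n
n = len(sets)

def ind(S):
    """Indicator function of the subset S."""
    return lambda x: 1 if x in S else 0

union = set().union(*sets)
for x in X:
    rhs = 0
    for r in range(1, n + 1):
        for F in combinations(range(n), r):
            # intersection of the A_k with k in F
            inter = reduce(lambda s, k: s & sets[k], F, X)
            rhs += (-1) ** (len(F) + 1) * ind(inter)(x)
    # 1_{union A_k}(x) equals the alternating sum over nonempty F
    assert ind(union)(x) == rhs
print("inclusion-exclusion verified pointwise")
```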

As suggested by the previous example, the indicator function is a useful notational device in combinatorics. The notation is used in other places as well, for instance in probability theory: if X is a probability space with probability measure $\operatorname {P}$ and A is a measurable set, then $\mathbf {1} _{A}$ becomes a random variable whose expected value is equal to the probability of A:

$\operatorname {E} (\mathbf {1} _{A})=\int _{X}\mathbf {1} _{A}(x)\,d\operatorname {P} =\int _{A}d\operatorname {P} =\operatorname {P} (A)$ .

This identity is used in a simple proof of Markov's inequality.
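The identity $\operatorname {E} (\mathbf {1} _{A})=\operatorname {P} (A)$ can be illustrated with a quick Monte Carlo sketch: draw uniform samples on $[0,1)$ and average the indicator of an event. The event $A=[0,0.3)$ and the sample size are illustrative choices.

```python
import random

random.seed(0)                      # fixed seed for reproducibility
N = 100_000
samples = [random.random() for _ in range(N)]
ind_A = [1 if x < 0.3 else 0 for x in samples]

estimate = sum(ind_A) / N           # sample mean of the indicator
print(estimate)                     # close to P(A) = 0.3
```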

In many cases, such as order theory, the inverse of the indicator function may be defined. This is commonly called the generalized Möbius function, as a generalization of the inverse of the indicator function in elementary number theory, the Möbius function. (See paragraph below about the use of the inverse in classical recursion theory.)

## Mean, variance and covariance

Given a probability space $\textstyle (\Omega ,{\mathcal {F}},\operatorname {P} )$ with $A\in {\mathcal {F}}$ , the indicator random variable $\mathbf {1} _{A}\colon \Omega \rightarrow \mathbb {R}$ is defined by $\mathbf {1} _{A}(\omega )=1$ if $\omega \in A,$ and $\mathbf {1} _{A}(\omega )=0$ otherwise.

Mean
$\operatorname {E} (\mathbf {1} _{A}(\omega ))=\operatorname {P} (A)$ (also called the "fundamental bridge").

Variance
$\operatorname {Var} (\mathbf {1} _{A}(\omega ))=\operatorname {P} (A)(1-\operatorname {P} (A))$

Covariance
$\operatorname {Cov} (\mathbf {1} _{A}(\omega ),\mathbf {1} _{B}(\omega ))=\operatorname {P} (A\cap B)-\operatorname {P} (A)\operatorname {P} (B)$

## Characteristic function in recursion theory, Gödel's and Kleene's representing function

Kurt Gödel described the representing function in his 1934 paper "On undecidable propositions of formal mathematical systems":

"There shall correspond to each class or relation R a representing function $\phi (x_{1},\dotsc ,x_{n})=0$ if $R(x_{1},\dotsc ,x_{n})$ and $\phi (x_{1},\dotsc ,x_{n})=1$ if $\neg R(x_{1},\dotsc ,x_{n})$ ." (The "¬" indicates logical inversion, i.e. "NOT".)

Kleene (1952) offers the same definition in the context of the primitive recursive functions: a function $\phi$ of a predicate P takes the value 0 if the predicate is true and 1 if the predicate is false.

For example, because the product of characteristic functions $\phi _{1}\cdot \phi _{2}\cdots \phi _{n}=0$ whenever any one of the functions equals 0, it plays the role of logical OR: if $\phi _{1}=0$ or $\phi _{2}=0$ or ... or $\phi _{n}=0$ , then their product is 0. What appears to the modern reader as the representing function's logical inversion, i.e. the representing function is 0 when the function R is "true" or satisfied, plays a useful role in Kleene's definition of the logical functions OR, AND, and IMPLY (p. 228), the bounded (p. 228) and unbounded (p. 279 ff) mu operators (Kleene (1952)), and the CASE function (p. 229).
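The inverted convention can be sketched as follows; the predicate and function names here are illustrative, not from Kleene's text.

```python
def representing(pred):
    """Representing function: 0 if the predicate holds, 1 otherwise."""
    return lambda *args: 0 if pred(*args) else 1

is_even = representing(lambda n: n % 2 == 0)
is_small = representing(lambda n: n < 10)

# Under this convention, multiplying representing functions realizes OR:
# the product is 0 ("true") as soon as any factor is 0 ("true").
def OR(phi1, phi2):
    return lambda n: phi1(n) * phi2(n)

print(OR(is_even, is_small)(3))    # 0: 3 is small, so the disjunction holds
print(OR(is_even, is_small)(11))   # 1: 11 is neither even nor small
```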

## Characteristic function in fuzzy set theory

In classical mathematics, characteristic functions of sets only take values 1 (members) or 0 (non-members). In fuzzy set theory, characteristic functions are generalized to take values in the real unit interval [0, 1], or more generally, in some algebra or structure (usually required to be at least a poset or lattice). Such generalized characteristic functions are more usually called membership functions, and the corresponding "sets" are called fuzzy sets. Fuzzy sets model the gradual change in the membership degree seen in many real-world predicates like "tall", "warm", etc.
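A membership function for the predicate "tall" might be sketched as a piecewise-linear ramp; the thresholds 150 cm and 190 cm below are illustrative, not standard values.

```python
def tall(height_cm):
    """Degree of membership in the fuzzy set 'tall', in [0, 1]."""
    if height_cm <= 150:
        return 0.0          # clearly not tall
    if height_cm >= 190:
        return 1.0          # clearly tall
    return (height_cm - 150) / 40   # linear ramp in between

print(tall(140))   # 0.0
print(tall(170))   # 0.5 - borderline membership
print(tall(200))   # 1.0
```

A crisp (classical) indicator is recovered as the special case where the ramp collapses to a single threshold.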

## Derivatives of the indicator function

A particular indicator function is the Heaviside step function. The Heaviside step function $H(x)$ is the indicator function of the one-dimensional positive half-line, i.e. the domain $[0,\infty )$ . The distributional derivative of the Heaviside step function is equal to the Dirac delta function, i.e.

$\delta (x)={\tfrac {dH(x)}{dx}}$ with the following property:

$\int _{-\infty }^{\infty }f(x)\,\delta (x)\,dx=f(0).$

The derivative of the Heaviside step function can be seen as the inward normal derivative at the boundary of the domain given by the positive half-line. In higher dimensions, the derivative naturally generalises to the inward normal derivative, while the Heaviside step function naturally generalises to the indicator function of some domain D. The surface of D will be denoted by S. Proceeding, it can be derived that the inward normal derivative of the indicator gives rise to a 'surface delta function', which can be indicated by $\delta _{S}(\mathbf {x} )$ :

$\delta _{S}(\mathbf {x} )=-\mathbf {n} _{x}\cdot \nabla _{x}\mathbf {1} _{\mathbf {x} \in D}$ where n is the outward normal of the surface S. This 'surface delta function' has the following property:

$-\int _{\mathbb {R} ^{n}}f(\mathbf {x} )\,\mathbf {n} _{x}\cdot \nabla _{x}\mathbf {1} _{\mathbf {x} \in D}\;d^{n}\mathbf {x} =\oint _{S}\,f(\mathbf {\beta } )\;d^{n-1}\mathbf {\beta } .$ By setting the function f equal to one, it follows that the inward normal derivative of the indicator integrates to the numerical value of the surface area S.
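As a numerical sanity check of the one-dimensional sifting property $\int f(x)\,\delta (x)\,dx=f(0)$, one can approximate $\delta$ by the derivative of a logistic smoothing of the Heaviside step. The smoothing width `eps` and integration step `h` below are illustrative choices.

```python
import math

def delta_approx(x, eps=1e-3):
    """Derivative of the smoothed step H_eps(x) = 1/(1 + exp(-x/eps))."""
    t = x / eps
    if abs(t) > 50:          # far tails: the bump is numerically zero
        return 0.0
    s = 1.0 / (1.0 + math.exp(-t))
    return s * (1.0 - s) / eps

f = math.cos                 # test function with f(0) = 1
h = 1e-5                     # Riemann-sum step over [-0.1, 0.1]
integral = sum(f(i * h) * delta_approx(i * h) * h
               for i in range(-10_000, 10_001))
print(round(integral, 3))    # close to f(0) = 1.0
```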