In computability theory, the μ-operator, minimization operator, or unbounded search operator searches for the least natural number with a given property. Adding the μ-operator to the five primitive recursive operators makes it possible to define all computable functions.
Suppose that R(y, x_{1}, ..., x_{k}) is a fixed (k+1)-ary relation on the natural numbers. The μ-operator "μy", in either the unbounded or bounded form, is a "number theoretic function" defined from the natural numbers to the natural numbers. However, "μy" contains a predicate over the natural numbers that delivers true when the predicate is satisfied and false when it is not.
The bounded μ-operator appears earlier in Kleene (1952) Chapter IX Primitive Recursive Functions, §45 Predicates, prime factor representation as:

μy_{y<z} R(y) = the least y < z such that R(y), if (Ey)_{y<z} R(y); otherwise z. (p. 225)
Stephen Kleene notes that any of the six inequality restrictions on the range of the variable y is permitted, i.e. y < z, y ≤ z, w < y < z, w < y ≤ z, w ≤ y < z and w ≤ y ≤ z. "When the indicated range contains no y such that R(y) [is "true"], the value of the "μy" expression is the cardinal number of the range" (p. 226); this is why the default "z" appears in the definition above. As shown below, the bounded μ-operator "μy_{y<z}" is defined in terms of two primitive recursive functions called the finite sum Σ and finite product Π, a predicate function that "does the test" and a representing function that converts {t, f} to {0, 1}.
In Chapter XI §57 General Recursive Functions, Kleene defines the unbounded μ-operator over the variable y in the following manner:

μy R(y) = the least y such that R(y).
In this instance R itself, or its representing function, delivers 0 when it is satisfied (i.e. when it delivers true); the operator then delivers that number y. No upper bound exists on y, hence no inequality expressions appear in its definition.
For a given R(y) the unbounded μ-operator μyR(y) (note there is no requirement for "(Ey)") is a partial function. Kleene makes it a total function instead (cf. p. 317):

εy R(y) = the least y such that R(y), if (Ey)R(y); otherwise 0.
The total version of the unbounded μ-operator is studied in higher-order reverse mathematics (Kohlenbach (2005)) in the following form:

(μ^{2}): (∀f^{1})((∃n^{0})(f(n) = 0) → f(μ(f)) = 0),
where the superscripts mean that n is zeroth-order, f is first-order, and μ is second-order. This axiom gives rise to the Big Five system ACA_{0} when combined with the usual base theory of higher-order reverse mathematics.
(i) In the context of the primitive recursive functions, where the search variable y of the μ-operator is bounded, e.g. y < z as in the formula below, if the predicate R is primitive recursive (Kleene Proof #E p. 228), then the function μy_{y<z} R(y, x_{1}, ..., x_{k}) is also primitive recursive.
(ii) In the context of the (total) recursive functions, where the search variable y is unbounded but guaranteed to exist for all values x_{i} of the total recursive predicate R's parameters,
then the five primitive recursive operators plus the unbounded-but-total μ-operator give rise to what Kleene called the "general" recursive functions (i.e. total functions defined by the six recursion operators).
(iii) In the context of the partial recursive functions: suppose that the relation R(y, x_{1}, ..., x_{k}) holds if and only if a partial recursive function converges to zero, and suppose that this partial recursive function converges (to something, not necessarily zero) whenever μy R(y, x_{1}, ..., x_{k}) is defined and y is μy R(y, x_{1}, ..., x_{k}) or smaller. Then the function μy R(y, x_{1}, ..., x_{k}) is also a partial recursive function.
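The unbounded search itself is easy to sketch in code. The following Python helper (the name `mu` is ours, chosen for illustration) returns the least witness and, like the mathematical operator, simply fails to terminate when no witness exists:

```python
def mu(R, *x):
    """Unbounded mu-operator: least y such that R(y, x1, ..., xk) holds.

    Like the mathematical operator, this loop diverges (never returns)
    when no witness y exists -- it is a partial function.
    """
    y = 0
    while not R(y, *x):
        y += 1
    return y

# least y with y * y >= x, for x = 10
print(mu(lambda y, x: y * y >= x, 10))  # prints 4
```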
The μ-operator is used in the characterization of the computable functions as the μ-recursive functions.
In constructive mathematics, the unbounded search operator is related to Markov's principle.
The bounded μ-operator can be expressed rather simply in terms of two primitive recursive functions (hereafter "prf") that also are used to define the CASE function--the product-of-terms Π and the sum-of-terms Σ (cf. Kleene #B page 224). (As needed, any bound on the variable, such as s ≤ t or t < z, or 5 < x < 17, etc., is appropriate.) For example:

Π_{s≤t} f(s, x) = f(0, x) × f(1, x) × ... × f(t, x)
Σ_{t<z} g(t, x) = g(0, x) + g(1, x) + ... + g(z-1, x)
Before we proceed we need to introduce a function ψ called "the representing function" of predicate R. Function ψ is defined from inputs (t = "truth", f = "falsity") to outputs (0, 1) (note the order!). In this case the input to ψ, i.e. {t, f}, is coming from the output of R:

ψ(R) = 0 if R is true; ψ(R) = 1 if R is false.
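As a one-line illustration (Python; the name `psi` stands in for ψ), the representing function is just:

```python
def psi(r):
    """Representing function of a predicate: true -> 0, false -> 1
    (note the inverted order: 0 codes "truth")."""
    return 0 if r else 1

print(psi(True), psi(False))  # prints: 0 1
```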
Kleene demonstrates that μy_{y<z}R(y) is defined as follows; we see the product function Π acting like a Boolean OR operator, and the sum Σ acting somewhat like a Boolean AND but producing a count {0, 1, 2, ...} rather than just {1, 0}:

μy_{y<z} R(y) = Σ_{t<z} Π_{s≤t} ψ(R(s))
The equation is easier to follow with an example, as given by Kleene. He just made up the entries for the representing function ψ(R(y)), designating it ψ(y) rather than ψ(x, y):
y | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7=z |
---|---|---|---|---|---|---|---|---|
ψ(y) | 1 | 1 | 1 | 0 | 1 | 0 | 0 | |
π(y) = Π_{s≤y} ψ(s) | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
σ(y) = Σ_{t≤y} π(t) | 1 | 2 | 3 | 3 | 3 | 3 | 3 | 3 |
least y < z such that R(y) is "true": φ(y) = μy_{y<z}R(y) | 3 | | | | | | | |
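Kleene's sum-of-products construction can be checked mechanically. The following Python sketch (function names are ours) computes the finite sum over t < z of the finite products over s ≤ t of the representing function, and reproduces the table's result, including the default value z when no y qualifies:

```python
def psi(b):
    # representing function: true -> 0, false -> 1
    return 0 if b else 1

def bounded_mu(R, z):
    """Bounded mu-operator as a finite sum of finite products."""
    total = 0
    for t in range(z):
        prod = 1
        for s in range(t + 1):          # product over s <= t
            prod *= psi(R(s))
        total += prod                   # sum over t < z
    return total

R = lambda y: y in (3, 5, 6)            # R "true" at y = 3, 5, 6, as in the table
print(bounded_mu(R, 7))                 # prints 3: the least y < z with R(y)
print(bounded_mu(lambda y: False, 7))   # prints 7: the default z when no y qualifies
```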
The unbounded μ-operator--the function μy--is the one commonly defined in the texts. But the reader may wonder why the unbounded μ-operator is searching for a function R(x, y) to yield zero, rather than some other natural number.
The reason for zero is that the unbounded operator μy will be defined in terms of the function "product" Π with its index y allowed to "grow" as the μ-operator searches. As noted in the example above, the product Π_{s<y} of a string of numbers π(x, 0) × ... × π(x, y) yields zero whenever one of its members π(x, i) is zero:

Π_{s<y'} π(x, s) = 0 if any π(x, i) = 0, where 0 ≤ i ≤ y. Thus the Π is acting like a Boolean AND.
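A one-liner makes this "sticky zero" behavior visible: once any factor is 0, every subsequent running product is 0 (Python; the sample values are chosen to match the example below):

```python
from itertools import accumulate
from operator import mul

factors = [3, 1, 2, 0, 9, 0, 1, 5]      # sample factor values
# running products: once a 0 appears, the product stays 0 forever
print(list(accumulate(factors, mul)))   # prints [3, 3, 6, 0, 0, 0, 0, 0]
```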
The function μy produces as "output" a single natural number y ∈ {0, 1, 2, 3, ...}. However, inside the operator one of a couple of "situations" can appear: (a) a "number-theoretic function" χ that produces a single natural number, or (b) a "predicate" R that produces either {t = true, f = false}. (And, in the context of the partial recursive functions, Kleene later admits a third outcome: "u = undecided".^{[1]})
Kleene splits his definition of the unbounded μ-operator to handle the two situations (a) and (b). For situation (b), before the predicate R(x, y) can serve in an arithmetic capacity in the product Π, its output {t, f} must first be "operated on" by its representing function ψ to yield {0, 1}. And for situation (a), if a single definition is to be used, then the number-theoretic function χ must produce zero to "satisfy" the μ-operator. With this matter settled, he demonstrates with a single "Proof III" that either type (a) or (b) together with the five primitive recursive operators yields the (total) recursive functions, with this proviso for a total function:
Kleene also admits a third situation (c) that does not require the demonstration of "for all x a y exists such that χ(x, y)." He uses this in his proof that more total recursive functions exist than can be enumerated; cf. footnote Total function demonstration.
Kleene's proof is informal and uses an example similar to the first example, but first he casts the μ-operator into a different form that uses the "product-of-terms" Π operating on a function χ that yields a natural number n, which can be any natural number, and 0 in the instance when the μ-operator's test is "satisfied".
This is subtle. At first glance the equations seem to be using primitive recursion. But Kleene has not provided us with a base step and an induction step of the general form:

base step: φ(0, x) = g(x)
induction step: φ(y', x) = h(y, φ(y, x), x)
To see what is going on, we first have to remind ourselves that we have assigned a parameter (a natural number) to every variable x_{i}. Second, we do see a successor-operator at work iterating y (i.e. the y'). And third, we see that the function μy_{y<z} χ(y, x) is just producing instances of χ(y, x), i.e. χ(0, x), χ(1, x), ..., until an instance yields 0. Fourth, when an instance χ(n, x) yields 0 it causes the middle term of ρ, i.e. v = π(x, y'), to yield 0. Finally, when the middle term v = 0, μy_{y<z} χ(y) executes line (iii) and "exits". Kleene's presentation of equations (ii) and (iii) has been exchanged here to make the point that line (iii) represents an exit--an exit taken only when the search successfully finds a y to satisfy χ(y) so that the middle product-term π(x, y') is 0; the operator then terminates its search with ρ(z', 0, y) = y.
For the example Kleene "...consider[s] any fixed values of (x_{1}, ..., x_{n}) and write[s] simply 'χ(y)' for 'χ(x_{1}, ..., x_{n}, y)'":
y | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | etc. |
---|---|---|---|---|---|---|---|---|---|
χ(y) | 3 | 1 | 2 | 0 | 9 | 0 | 1 | 5 | . . . |
π(y) = Π_{s<y} χ(s) | 1 | 3 | 3 | 6 | 0 | 0 | 0 | 0 | . . . |
least y such that χ(y) = 0: μy χ(y) | 3 | | | | | | | | |
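The search Kleene describes can be sketched as a loop that grows the running product until it collapses to zero; the exit condition is exactly "the newest factor χ(y) was 0" (Python sketch, names ours):

```python
def mu_via_product(chi):
    """Least y with chi(y) = 0, found by growing the running product
    pi(y') = pi(y) * chi(y) until it collapses to zero."""
    y, prod = 0, 1
    while True:
        prod *= chi(y)      # extend the product by the next factor
        if prod == 0:       # chi(y) == 0: test satisfied, exit with y
            return y
        y += 1

vals = [3, 1, 2, 0, 9, 0, 1, 5]            # chi(y) as in the example
print(mu_via_product(lambda y: vals[y]))   # prints 3
```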
Both Minsky (1967) p. 21 and Boolos-Burgess-Jeffrey (2002) p. 60-61 provide definitions of the μ-operator as an abstract machine; see footnote Alternative definitions of μ.
The following demonstration follows Minsky without the "peculiarity" mentioned in the footnote. The demonstration will use a "successor" counter machine model closely related to the Peano Axioms and the primitive recursive functions. The model consists of (i) a finite state machine with a TABLE of instructions and a so-called 'state register' that we will rename "the Instruction Register" (IR), (ii) a few "registers" each of which can contain only a single natural number, and (iii) an instruction set of four "commands" described in the following table:
Instruction | Mnemonic | Action on register(s) "r" | Action on Instruction Register, IR |
---|---|---|---|
CLeaR register | CLR ( r ) | 0 -> r | [ IR ] + 1 -> IR |
INCrement register | INC ( r ) | [ r ] + 1 -> r | [ IR ] + 1 -> IR |
Jump if Equal | JE (r_{1}, r_{2}, z) | none | IF [ r_{1} ] = [ r_{2} ] THEN z -> IR ELSE [ IR ] + 1 -> IR |
Halt | H | none | [ IR ] -> IR |
The algorithm for the minimization operator μy[φ(x, y)] will, in essence, create a sequence of instances of the function φ(x, y) as the value of parameter y (a natural number) increases; the process will continue (see Note + below) until a match occurs between the output of function φ(x, y) and some pre-established number (usually 0). Thus the evaluation of φ(x, y) requires, at the outset, assignment of a natural number to each of its variables x and an assignment of a "match-number" (usually 0) to a register "w", and a number (usually 0) to register y.
In the following we are assuming that the Instruction Register (IR) encounters the μy "routine" at instruction number "n". Its first action will be to establish a number in a dedicated "w" register--an "example of" the number that function φ(x, y) must produce before the algorithm can terminate (classically this is the number zero, but see the footnote about the use of numbers other than zero). The algorithm's next action at instruction "n+1" will be to clear the "y" register--"y" will act as an "up-counter" that starts from 0. Then at instruction "n+2" the algorithm evaluates its function φ(x, y)--we assume this takes j instructions to accomplish--and at the end of its evaluation φ(x, y) deposits its output in register "φ". At the (n+j+3)rd instruction the algorithm compares the number in the "w" register (e.g. 0) to the number in the "φ" register--if they are the same the algorithm has succeeded and it escapes through exit; otherwise it increments the contents of the "y" register and loops back with this new y-value to test function φ(x, y) again.
IR | Instruction | Action on register | Action on Instruction Register IR | |
---|---|---|---|---|
n | μy[φ(x, y)]: | CLR ( w ) | 0 -> w | [ IR ] + 1 -> IR |
n+1 | CLR ( y ) | 0 -> y | [ IR ] + 1 -> IR | |
n+2 | loop: | φ(x, y) | φ([x], [y]) -> φ | [ IR ] + j + 1 -> IR |
n+j+3 | JE (φ, w, exit) | none | CASE: { IF [ φ ] = [ w ] THEN exit -> IR ELSE [IR] + 1 -> IR } | |
n+j+4 | INC ( y ) | [ y ] + 1 -> y | [ IR ] + 1 -> IR | |
n+j+5 | JE (0, 0, loop) | Unconditional jump | CASE: { IF [ r_{0} ] =[ r_{0} ] THEN loop -> IR ELSE loop -> IR } | |
n+j+6 | exit: | etc. |
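The register-level routine above translates line for line into the following Python sketch (register names w, y, and the output register of φ follow the table; `phi` is any function supplied by the caller):

```python
def mu_routine(phi, x, w=0):
    """Counter-machine mu-routine: registers w (match number),
    y (up-counter), and f (output register of phi)."""
    y = 0                      # n+1: CLR(y)
    while True:
        f = phi(x, y)          # n+2 (loop): evaluate phi, deposit in f
        if f == w:             # n+j+3: JE(f, w, exit)
            return y           # exit with the least satisfying y
        y += 1                 # n+j+4: INC(y)
                               # n+j+5: JE(0, 0, loop) -- unconditional jump

# least y with max(x - y*y, 0) == 0, i.e. least y with y*y >= x
print(mu_routine(lambda x, y: max(x - y * y, 0), 10))  # prints 4
```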
What is mandatory if the function is to be a total function is a demonstration by some other method (e.g. induction) that for each and every combination of values of its parameters x_{i} some natural number y will satisfy the μ-operator, so that the algorithm that represents the calculation can terminate:

for all x_{1}, ..., x_{n}: (Ey) [φ(x_{1}, ..., x_{n}, y) = 0]
For an example of what this means in practice, see the examples at mu recursive functions--even the simplest truncated-subtraction algorithm "x - y = d" can yield, for the undefined cases when x < y: (1) no termination, (2) no numbers (i.e. something wrong with the format, so the yield is not considered a natural number), or (3) deceit: wrong numbers in the correct format. The "proper" subtraction algorithm requires careful attention to all the "cases".
But even when the algorithm has been shown to produce the expected output in the instances {(0, 0), (1, 0), (0, 1), (2, 1), (1, 1), (1, 2)}, we are left with an uneasy feeling until we can devise a "convincing demonstration" that the cases (x, y) = (n, m) all yield the expected results. To Kleene's point: is our "demonstration" (i.e. the algorithm that is our demonstration) convincing enough to be considered effective?
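The case analysis can be handled by defining truncated ("proper") subtraction through repeated application of the predecessor function; this Python sketch returns 0 (rather than misbehaving) whenever x < y, and reproduces the six instances listed above:

```python
def pred(n):
    """Predecessor on the naturals: pred(0) = 0, pred(n') = n."""
    return n - 1 if n > 0 else 0

def monus(x, y):
    """Truncated ("proper") subtraction: x - y if x >= y, else 0."""
    for _ in range(y):
        x = pred(x)
    return x

cases = [(0, 0), (1, 0), (0, 1), (2, 1), (1, 1), (1, 2)]
print([monus(a, b) for a, b in cases])  # prints [0, 1, 0, 1, 0, 0]
```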
The unbounded μ-operator is defined by Minsky (1967) p. 210, but with a peculiar flaw: the operator will not yield t = 0 when its predicate (the IF-THEN-ELSE test) is satisfied; rather, it yields t = 2. In Minsky's version the counter is "t", and the function φ(t, x) deposits its number in register φ. In the usual μ definition register w will contain 0, but Minsky observes that it can contain any number k. Minsky's instruction set is equivalent to the following, where "JNE" = Jump to z if Not Equal:
IR | Instruction | Action on register | Action on Instruction Register, IR | |
---|---|---|---|---|
n | μy φ( x ): | CLR ( w ) | 0 -> w | [ IR ] + 1 -> IR |
n+ 1 | CLR ( t ) | 0 -> t | [ IR ] + 1 -> IR | |
n+2 | loop: | φ(t, x) | φ( [ t ], [ x ] ) -> φ | [ IR ] + j + 1 -> IR |
n+j+3 | INC ( t ) | [ t ] + 1 -> t | [ IR ] + 1 -> IR | |
n+j+4 | JNE (φ, w, loop) | none | CASE: { IF [ φ ] ≠ [ w ] THEN loop -> IR ELSE [IR] + 1 -> IR } | |
n+j+5 | INC ( t ) | [ t ] + 1 -> t | [ IR ] + 1 -> IR | |
n+j+6 | exit: | etc. |
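Tracing Minsky's version in code makes the "peculiarity" concrete: t is incremented once before the JNE test and once more on the fall-through, so on success the counter holds the witness plus 2 rather than the witness itself (Python sketch, names ours):

```python
def minsky_mu(phi, x, w=0):
    """Minsky-style routine: the counter t overshoots the witness by 2."""
    t = 0                     # n+1: CLR(t)
    while True:
        f = phi(t, x)         # n+2 (loop): evaluate phi at counter t
        t += 1                # n+j+3: INC(t) -- before the test
        if f != w:            # n+j+4: JNE(f, w, loop)
            continue
        t += 1                # n+j+5: INC(t) -- the extra increment
        return t              # exit: t = witness + 2

vals = [3, 1, 2, 0, 9, 0, 1, 5]
print(minsky_mu(lambda t, x: vals[t], None))  # prints 5 (witness 3, plus 2)
```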
The unbounded μ-operator is also defined by Boolos-Burgess-Jeffrey (2002) p. 60-61 for a counter machine with an instruction set equivalent to the following:
In this version the counter "y" is called "r2", and the function f( x, r2 ) deposits its number in register "r3". Perhaps the reason Boolos-Burgess-Jeffrey clear r3 is to facilitate an unconditional jump to loop; this is often done by use of a dedicated register "0" that contains "0":
IR | Instruction | Action on register | Action on Instruction Register, IR | |
---|---|---|---|---|
n | μ_{r2}[f(x, r_{2})]: | CLR ( r_{2} ) | 0 -> r_{2} | [ IR ] + 1 -> IR |
n+1 | loop: | f(x, r_{2}) | f( [ x ], [ r_{2} ] ) -> r_{3} | [ IR ] + j + 1 -> IR |
n+j+2 | JZ ( r_{3}, exit ) | none | IF [ r_{3} ] = 0 THEN exit -> IR ELSE [ IR ] + 1 -> IR | |
n+j+3 | CLR ( r_{3} ) | 0 -> r_{3} | [ IR ] + 1 -> IR | |
n+j+4 | INC ( r_{2} ) | [ r_{2} ] + 1 -> r_{2} | [ IR ] + 1 -> IR | |
n+j+5 | JZ ( r_{3}, loop) | CASE: { IF [ r_{3} ] = 0 THEN loop -> IR ELSE [IR] + 1 -> IR } | ||
n+j+6 | exit: | etc. |
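For comparison, the Boolos-Burgess-Jeffrey routine in the same style; clearing r3 before jumping back makes JZ(r3, loop) unconditional, exactly as the table's last CASE shows (Python sketch, names ours):

```python
def bbj_mu(f, x):
    """BBJ-style routine: r2 is the up-counter, r3 holds f's output."""
    r2 = 0                    # n: CLR(r2)
    while True:
        r3 = f(x, r2)         # n+1 (loop): evaluate f, deposit in r3
        if r3 == 0:           # JZ(r3, exit)
            return r2
        r3 = 0                # CLR(r3) -- so the jump back is unconditional
        r2 += 1               # INC(r2)
                              # JZ(r3, loop): r3 is 0, so this always jumps

vals = [3, 1, 2, 0, 9]
print(bbj_mu(lambda x, r2: vals[r2], None))  # prints 3
```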