
Algorithms

\({{\{ 0, 1 \}^*}}\) denotes the set of all finite binary strings (words). For any \(x \in {{\{ 0, 1 \}^*}}\), \({\lvert x \rvert}\) denotes the length of \(x\), i.e. \(x \in {{\{ 0, 1 \}^{{\lvert x \rvert}}}}\). Given \(x, y \in {{\{ 0, 1 \}^*}}\), \(xy\) stands for the concatenation of \(x\) and \(y\) (in particular, \({\lvert xy \rvert} = {\lvert x \rvert} + {\lvert y \rvert}\)). For any \(n \in {\mathbb{N}}\), \(U^n\) denotes the uniform probability distribution on \({{\{ 0, 1 \}^{n}}}\).
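For concreteness, the basic string notation can be sketched in Python; the function name here is our illustrative choice, not part of the text, and words over \(\{0,1\}\) are modeled as Python strings of `'0'`/`'1'` characters:

```python
import random

x, y = "010", "11"

# |xy| = |x| + |y|: concatenation adds lengths.
assert len(x + y) == len(x) + len(y)

def sample_uniform(n: int, rng: random.Random) -> str:
    """Draw one word from U^n, the uniform distribution on {0,1}^n."""
    return "".join(rng.choice("01") for _ in range(n))

w = sample_uniform(5, random.Random(0))
assert len(w) == 5 and set(w) <= {"0", "1"}
```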

An encoded set is a set \(X\) together with an injection \(e_X: X \rightarrow {{\{ 0, 1 \}^*}}\) (the encoding) s.t. \(\operatorname{Im} e_X\) is decidable in polynomial time.
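As a toy illustration (this particular encoding is our choice, not fixed by the text), \({\mathbb{N}}\) with the standard binary encoding is an encoded set: the encoding is injective, and its image is decidable in linear, hence polynomial, time:

```python
def encode_nat(n: int) -> str:
    """Injective encoding e_N : N -> {0,1}*: standard binary, no leading zeros."""
    return bin(n)[2:]

def in_image(w: str) -> bool:
    """Decide Im e_N in time O(|w|): binary words that are '0' or start with '1'."""
    if not w or any(c not in "01" for c in w):
        return False
    return w == "0" or w[0] == "1"

assert encode_nat(6) == "110"
assert in_image("110") and not in_image("011")
```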

We treat standard sets such as \({\mathbb{N}}\) or \({\mathbb{Q}}\) as encoded without specifying the encoding explicitly, since the choice only matters up to a polynomial-time transformation (any conventional encoding will do).

Given \(n \in {\mathbb{N}}\), encoded sets \(X_1, X_2, \ldots, X_n\) and an encoded set \(Y\), we use the notation \(A: \prod_{i=1}^n X_i \xrightarrow{alg} Y\) for a Turing machine with \(n\) input tapes that halts on every input in which the \(i\)-th tape is initialized to a value in \(\operatorname{Im} e_{X_i}\), and produces an output in \(\operatorname{Im} e_Y\). Given \(\{x_i \in X_i\}_{i=1}^n\), the notation \(A(x_1, x_2, \ldots, x_n)\) stands for the unique \(y \in Y\) s.t. applying \(A\) to the input whose \(i\)-th tape holds \(e_{X_i}(x_i)\) results in the output \(e_Y(y)\). We use separate input tapes for the different components of the input, rather than encoding the \(n\)-tuple as a single word, in order to allow \(A\) to process some components of the input in time smaller than the length of other components. This is an abuse of notation, since a Cartesian product of encoded sets is naturally an encoded set itself, but hopefully it won't cause much confusion.
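The point about separate tapes can be illustrated with a hypothetical algorithm \(A: {\mathbb{N}} \times {{\{ 0, 1 \}^*}} \xrightarrow{alg} {{\{ 0, 1 \}^*}}\) that outputs a prefix of its second argument: it reads all of its first tape but at most \(n\) cells of its second tape, so its running time can be far smaller than the length of the second tape's content. A Python sketch (tape contents modeled as strings, names illustrative):

```python
def prefix_alg(tape1: str, tape2: str) -> str:
    """A(n, x) = the prefix of x of length min(n, |x|).

    tape1 holds e_N(n) in binary; tape2 holds x itself. The machine this
    models would inspect at most n cells of tape2, regardless of |x|.
    """
    n = int(tape1, 2)
    return tape2[:n]

assert prefix_alg("11", "0101010101") == "010"   # n = 3
```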

Given \(A: X \xrightarrow{alg} Y\) and \(x \in X\), \(\operatorname{T}_A(x)\) stands for the number of time steps in the computation of \(A(x)\).
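A quantity in the spirit of \(\operatorname{T}_A(x)\) can be sketched by instrumenting a toy algorithm with a step counter; here one "step" per tape cell inspected, and both the algorithm and this cost model are illustrative assumptions:

```python
def first_one(x: str):
    """Toy algorithm A : {0,1}* -> N returning the index of the first '1'
    in x (or |x| if there is none), together with its step count T_A(x)."""
    steps = 0
    for i, c in enumerate(x):
        steps += 1              # one step per cell inspected
        if c == "1":
            return i, steps
    return len(x), steps

output, t = first_one("0010")
assert (output, t) == (2, 3)    # the scan inspects cells 0, 1, 2
```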