
Optimal Predictors: A Bayesian Notion of Approximation Algorithm

Abstract

The concept of an "approximation algorithm" is usually applied only to optimization problems, since in optimization problems the performance of an algorithm on any given input is a continuous parameter. We introduce a new concept of approximation applicable to decision problems and functions, inspired by Bayesian probability. From the perspective of a Bayesian reasoner with limited computational resources, the answer to a problem that cannot be solved exactly is uncertain and therefore should be described by a random variable. It thus makes sense to talk about the expected value of this random variable, an idea we formalize in the language of average-case complexity theory by introducing the concept of an "optimal predictor." We show that optimal predictors exhibit many parallels with "classical" probability theory, prove several existence theorems, and demonstrate some applications to artificial general intelligence.


Introduction

Imagine you are strolling through the city with a friend when a car passes by with the license plate number “7614829”. Your friend proposes a wager, claiming that the number is composite and offering 10 : 1 odds in your favor. Knowing that your friend has no exceptional ability in mental arithmetic and that it is highly unlikely they have seen this car before, you realize they are just guessing. Your own mental arithmetic is likewise insufficient to test the number for primality, but it is sufficient to check that \(7614829 \equiv 1 \pmod{3}\) and