
Overview

[motivation]

Heuristic strategies are interesting to study because they tend to uncover links between an optimal strategy and the immediate next step. A good heuristic has an intuitive rationale as to why it is constructed the way it is.

In this sense, the objective of a heuristic may not be solely to yield an optimal solution quickly. [See e.g. neuwirth].

It is fair to expect that a heuristic tailored to one configuration of the rules may not perform well in another configuration. This is the defect of tailored heuristics. (Show some examples)

See (Pepperdine {2010}) for a list of heuristic functions.

[tie-breaker policy]

All known heuristic strategies work on the partition of the remaining secrets induced by a candidate guess. Notation: Let \(S\) be the set of remaining possibilities. Let \(\tilde{S}\) be the set of remaining possibilities after the guess, which is a random variable depending on the guess and the secret. Let \(n = \#\{S\}\) be the number of remaining possibilities, and let \(\tilde{n} = \#\{\tilde{S}\}\) be the number of remaining possibilities after a guess. Let a guess partition the remaining possibilities into \(r\) cells \(V_1, \ldots, V_r\), where the number of secrets in cell \(i\) is \(n_i = \#\{V_i\}\). Let \(x\) be the unknown secret.
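As a minimal Python sketch of these definitions (assuming digit-string codes; the helper names feedback and partition are mine, chosen for illustration only), a guess partitions the remaining set by the feedback each candidate secret would produce:

    from collections import Counter
    from itertools import permutations

    def feedback(guess: str, secret: str) -> tuple[int, int]:
        """Return (A, B): A = digits correct in place, B = correct but misplaced."""
        a = sum(g == s for g, s in zip(guess, secret))
        common = sum((Counter(guess) & Counter(secret)).values())
        return a, common - a

    def partition(guess: str, possibilities: list[str]) -> dict[tuple[int, int], list[str]]:
        """Split S into cells V_1, ..., V_r keyed by the feedback each secret would give."""
        cells: dict[tuple[int, int], list[str]] = {}
        for secret in possibilities:
            cells.setdefault(feedback(guess, secret), []).append(secret)
        return cells

    # Example: Bulls-and-cows style secrets with distinct digits.
    S = ["".join(p) for p in permutations("0123456789", 4)]
    cells = partition("0123", S)
    print(sorted((fb, len(v)) for fb, v in cells.items()))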

The Min-Max heuristic

Knuth (Knuth {1976}) published the first systematic paper on Mastermind, in which he introduced a heuristic strategy aimed at minimizing the worst-case number of remaining possibilities.

The same heuristic function first appeared in (Larmouth {1971}) for the Bulls and cows game, but was not elaborated there.

Rationale: fewer remaining possibilities is better. We want to play safe and reduce the worst-case number of remaining possibilities.
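In symbols, the Min-Max heuristic scores a guess by the size of the largest cell of its partition, so the heuristic function to minimize is

\[ h(P) = \max_{1 \le i \le r} n_i . \]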

Note that this strategy does not require any assumption on the distribution of the secret. [see neu] (This makes it suitable, e.g., for Mastermind with lies, or dynamic Mastermind.)

Note: if two guesses yield the same worst-case cell size, the second-worst cell size is compared, and so on.
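As an illustrative sketch (reusing the partition helper above; the key-function name is mine), this tie-breaking falls out of comparing the descending-sorted cell sizes lexicographically:

    def minmax_key(cells: dict) -> tuple[int, ...]:
        # Descending cell sizes: Python compares tuples lexicographically,
        # so the worst cell is compared first, then the second worst, etc.
        return tuple(sorted((len(v) for v in cells.values()), reverse=True))

    # best_guess = min(candidate_guesses, key=lambda g: minmax_key(partition(g, S)))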

The Min-Average heuristic

Rationale: fewer remaining possibilities is better. We want to minimize the expected partition size (number of remaining possibilities).

The objective function to minimize is the expected number of remaining possibilities after the guess,

\[ E[\tilde{n}] = \sum_{i=1}^r P(x \in V_i) \, \#\{V_i\} . \]

Now we assume each remaining possibility is equally likely to be the secret, so that \(P(x \in V_i) = n_i / n\). Then the above is transformed to

\[ E[\tilde{n}] = \sum_{i=1}^r \frac{n_i}{n} \, n_i = \frac{1}{n} \sum_{i=1}^r n_i^2 . \]

Since \(n\) is fixed, minimizing this expectation is equivalent to minimizing the heuristic function

\[ h(P) = \sum_{i=1}^r n_i^2 . \]

[Gini Index] Up to sign and scaling, \(h(P)\) is a Gini index of the partition: minimizing \(h(P)\) is equivalent to maximizing the Gini impurity \(1 - \sum_{i=1}^r (n_i/n)^2\).

A correction for a perfect match is useful. If the guess is among the remaining possibilities, then one of the responses is the perfect match, which accounts for a cell of size 1. However, we don’t need to make any further guesses in this case, so this cell should be excluded when calculating the expected remaining count, i.e. its contribution should be subtracted from the heuristic value.
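A minimal sketch of the corrected heuristic under the definitions above (for p4c10n the perfect match is the feedback 4A0B, i.e. the key (4, 0); the function name is mine):

    def min_average_score(guess: str, possibilities: list[str]) -> int:
        # h(P) = sum of n_i^2 over all cells, excluding the size-1
        # perfect-match cell when the guess is itself a possible secret.
        cells = partition(guess, possibilities)
        score = sum(len(v) ** 2 for v in cells.values())
        if (4, 0) in cells:   # the 4A0B cell has size 1, contributing 1^2 = 1
            score -= 1
        return score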

The Max-Entropy heuristic

Rationale: we want a guess to provide as much information as possible about what the secret is.

The entropy heuristic was first introduced by Neuwirth (Neuwirth {1982}) for Mastermind and by Larmouth (Larmouth {1971}) for Bulls and cows. It is a theoretically advanced heuristic that scores a guess by the “amount of information” brought by its partitioning of the potential secret set. To be precise, this heuristic aims to maximize the entropy of the partition, defined as

\[ H = - \sum_{i=1}^r \frac{n_i}{n} \log \frac{n_i}{n} . \]

The base of the logarithm is not specified, but it does not affect the choice of guess.

Rearranging terms, it can be written as \[\begin{aligned} H &= - \frac{1}{n} \left[ \sum_i n_i (\log n_i - \log n ) \right] \notag \\ &= - \frac{1}{n} \left( \sum_i n_i \log n_i \right) + \log n . \notag\end{aligned}\] When \(n\) is fixed, maximizing \(H\) is equivalent to minimizing the heuristic function

\[ h(P) = \sum_{i=1}^r n_i \log n_i . \]
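As a sketch, this heuristic is cheap to compute from the cell sizes (again reusing the partition helper above; note that singleton cells contribute \(1 \log 1 = 0\)):

    from math import log

    def entropy_score(guess: str, possibilities: list[str]) -> float:
        # h(P) = sum_i n_i * log(n_i); smaller is better, i.e. maximizes H.
        cells = partition(guess, possibilities)
        return sum(len(v) * log(len(v)) for v in cells.values())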

If we loosely interpret \(\log n_i\) as an estimate of the number of further guesses needed for a cell of size \(n_i\), then we can interpret the heuristic function (when divided by \(n\)) as an estimate of the expected number of further guesses needed.

[Note: floating point precision?]

Heeffer (Heeffer {2007}) tested various heuristic algorithms on the Mastermind game with 5 pegs and 8 colors, and found the entropy heuristic to perform the best.

See Wikipedia.

Each feedback from a guess can be thought of as a letter in an “alphabet”. For p4c10n there are 14 letters, but some letters are more likely to occur than others, and this likelihood depends on the guess chosen.

For example, if a guess partitions the possibility set into the discrete partition (all cells of size 1), then all letters of the alphabet that occur are equally likely.

If all letters are equally likely, then the entropy is maximized.
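This is the standard maximum-entropy fact: with \(p_i = n_i/n\) over the \(r\) feedbacks that actually occur,

\[ H = -\sum_{i=1}^r p_i \log p_i \le \log r , \]

with equality exactly when \(p_1 = \cdots = p_r = 1/r\).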

***************

Why does the entropy heuristic not yield the best worst-case or average number of steps?

The apparent underperformance of the entropy heuristic can be explained by noting that there is a distinction between determining the secret and revealing the secret. Suppose we are left with 2 possibilities: 5678 and 7890. We can determine the secret with one guess (e.g. 5678). However, to actually reveal the secret, we need to make an extra guess (7890) if 5678 returns 0A2B. In total we need a maximum of 2 guesses and an average of 1.5 guesses.

However, from an information-theory perspective, the extra guess is totally redundant in that there is no uncertainty about its outcome: we know for sure that we will get 4A0B when we guess 7890. In fact, when we guess 5678, the remaining entropy of the resulting partition {(5678:4A0B), (7890:0A2B)} is zero (ignoring the constant denominator), which means all uncertainty has been removed.

The extra step to reveal the secret is necessary in the traditional human game because otherwise it is difficult to judge that the code breaker has won. On the other hand, the rules of the human game could be slightly modified to remove the need for the extra guess: instead of being required to reveal the secret with a 4A0B feedback, the code breaker is required to assert the secret after a number of rounds. If the assertion is correct, he wins; if it is wrong, he loses. The number of guesses one needs before making an assertion is equal to the number of steps one needs to determine the secret, and this number is consistent with the information-theory perspective.

To cope with this subtle discrepancy between determining and revealing the secret, the entropy heuristic can be amended to distinguish the two, though in that case the theory is not as sound. See [taiwan wang you]. For example, Larmouth (Larmouth {1971}) used the following entropy heuristic with correction:

\[ h'(P) = \sum_i n_i \log n_i - (2 \log 2) \, \mathbb{1}\{ \text{4A0B} \in P \} . \]

The correction term applies when the partition contains the perfect-match response 4A0B. However, how the coefficient \(2 \log 2\) is derived is unclear.
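A sketch of the corrected score, reading the correction as an indicator term as above (this reading is an assumption on my part, as is the function name):

    from math import log

    def larmouth_score(guess: str, possibilities: list[str]) -> float:
        # Entropy heuristic with Larmouth's correction: subtract 2*log(2)
        # when the partition contains the perfect-match cell (4, 0), i.e. 4A0B.
        cells = partition(guess, possibilities)
        score = sum(len(v) * log(len(v)) for v in cells.values())
        if (4, 0) in cells:
            score -= 2 * log(2)
        return score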

Another issue with the entropy heuristic is that the computed entropy depends only on the probability of each cell (i.e. the size of each cell). This is because entropy theory assumes that we know absolutely nothing about the underlying random variable (the secret) except which cell it resides in. Under this assumption, two cells of the same size give the same amount of information; for example, if cell A and cell B both contain 3 possibilities, then whichever of them turns out to contain the secret, we are left knowing only that 3 possibilities remain, without any more knowledge about them.