The Rationale for Defeasibility
Pollock (1998) argues that defeasibility is a key aspect of human
cognition (and more generally, of the cognition of any boundedly
rational agent). We start with perceptual inputs and proceed by
inferring beliefs from our current cognitive states (our percepts plus
the beliefs we have previously inferred). A process so described must
satisfy two apparently incompatible desiderata:
- We must form our beliefs on the basis of partial perceptual input (we cannot wait until we have a complete representation of our environment).
- We must be able to take an unlimited set of perceptual inputs into account.
According to Pollock, the only way to reconcile these requirements is by
defeasible reasoning. We must adopt beliefs on the basis of a small set
of perceptual inputs, but then must be ready to retract these beliefs in
the face of additional perceptual inputs, whenever these additional
inputs conflict with the initial basis for our beliefs.
Thus, defeasible reasoning appears to have different, but related,
functions (see Sartor 2005, Sections 2.2 and 2.3). The first function
consists in providing us with provisional beliefs, on the basis of which
we can reason and act until we gain information to the contrary.
The second function consists in activating a structured process of
inquiry: drawing pro tanto conclusions, looking for their defeaters,
then for defeaters of those defeaters, and so on, until stable outcomes
are obtained. This process has two main advantages: (1) it focuses the
inquiry on relevant knowledge, and (2) it continues to deliver
provisional results while the inquiry moves on.
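The iterative process just described can be given a simple computational reading: a conclusion is provisionally accepted, a defeater can knock it out, and a defeater of that defeater can reinstate it, the labelling being repeated until nothing changes. The following Python sketch is merely illustrative and is not drawn from the sources cited here; the argument names and the defeat relation in the example are assumptions.

```python
# Illustrative sketch only: a naive fixpoint labelling of conclusions under a
# defeat relation. The arguments and defeat pairs below are invented for the
# example and do not come from the text above.

def stable_labelling(arguments, defeats):
    """Label each argument IN (accepted), OUT (defeated) or UNDEC (undecided).

    arguments: iterable of argument names
    defeats:   set of (attacker, target) pairs
    """
    labels = {a: "UNDEC" for a in arguments}
    changed = True
    while changed:                        # iterate until a stable outcome
        changed = False
        for a in labels:
            if labels[a] != "UNDEC":
                continue
            attackers = [x for (x, y) in defeats if y == a]
            if all(labels[x] == "OUT" for x in attackers):
                labels[a] = "IN"          # every defeater is itself defeated
                changed = True
            elif any(labels[x] == "IN" for x in attackers):
                labels[a] = "OUT"         # an undefeated defeater stands
                changed = True
    return labels

# B defeats A, and C defeats B (a defeater of a defeater).
print(stable_labelling({"A", "B", "C"}, {("B", "A"), ("C", "B")}))
# C and A end up IN, B ends up OUT: A is reinstated by the defeater of its defeater.
```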
A third function of defeasibility consists in enabling our collective
knowledge structures to persist over time, i.e., to continue to work as
a shared communal asset, even though each of us is exposed to new
information that often challenges the information we already have.
We indeed have two basic strategies for coping with the provisional
nature of human knowledge: revision and defeasibility.
Revision assumes that our general knowledge is a set of universal
laws. When we discover a case where such universal laws lead us to a
false (unacceptable or absurd) conclusion, we must conclude that our
theory (or the subsets of it entailing the false conclusion) has been
falsified and has thus become unacceptable (Popper 1959). We must then
abandon some propositions in that theory and replace them with new
universal propositions, from which the false conclusion is no longer
derivable. Rational strategies for revising a theory have been the
object of several studies (see, for instance, Alchourrón, Gärdenfors,
and Makinson 1985; Gärdenfors 1987). In the legal domain, this idea was
originally proposed by Alchourrón and Makinson (1981) and was
subsequently developed by Maranhão (2013).
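As a rough illustration of the revision strategy (a sketch, not the AGM formalism itself), the following Python fragment treats a theory as a set of Horn-style rules and, once an unacceptable conclusion becomes derivable, searches for the largest subsets of the theory from which that conclusion no longer follows. The rules, facts, and "unwanted" conclusion are assumptions introduced only for this example.

```python
# Illustrative sketch only: "revision by removal" over Horn-style rules.
# The rules, facts and unwanted conclusion are made up for this example.
from itertools import combinations

def entails(rules, facts, goal):
    """Naive forward chaining: do the rules plus the facts derive the goal?"""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return goal in derived

def candidate_revisions(rules, facts, unwanted):
    """Largest rule subsets from which the unwanted conclusion is no longer derivable."""
    for size in range(len(rules), -1, -1):
        keep = [list(sub) for sub in combinations(rules, size)
                if not entails(sub, facts, unwanted)]
        if keep:
            return keep
    return [[]]

# Toy theory: contracts are binding; a signed agreement under duress is still a contract.
rules = [(("contract",), "binding"),
         (("signed", "duress"), "contract")]
facts = {"signed", "duress"}

print(entails(rules, facts, "binding"))              # True: the duress case is held binding
print(candidate_revisions(rules, facts, "binding"))  # two alternative one-rule theories
```

Note that several maximal revisions may remain available, which is one reason why rational strategies for choosing among them have been studied.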
The other strategy, defeasibility, assumes that general propositions
are defaults, which are meant to govern most cases, or the normal
cases. Thus, we can consistently endorse such propositions and
deny that they apply to certain cases: the exception serves the rule, or
at least it does not compromise the rule. To deal with an anomalous case
on a defeasibility strategy, we do not abandon the default or change its
formulation, but instead we assume that the default’s operation is
limited on grounds that are different from those that support the use of
the default itself. As we saw in the previous example, these grounds may
provide an argument that undercuts or rebuts the argument warranted by
the default. The idea that legal norms are defaults (rather than strict
rules) makes possible a certain degree of stability in legal knowledge:
we do not need to change our norms whenever their application is limited
through subsequent exceptions or distinctions. However, this perspective
does not exclude the need to abandon a norm when it no longer reflects
a “normal” connection, having been superseded by subsequent norms (as in
implicit derogation), or when it is explicitly removed from the
knowledge base (as in explicit derogation; see Governatori and Rotolo
2010).
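By way of contrast with the revision sketch above, the following fragment (again merely illustrative; the norm, the exception, and the case facts are assumptions) shows the defeasibility strategy at work: the default norm is kept in its original formulation, and the exception merely blocks its operation in the anomalous case.

```python
# Illustrative sketch only: a default norm whose operation is limited by an
# exception. The norm, the exception and the case facts are invented here.
from dataclasses import dataclass

@dataclass
class Default:
    condition: str                        # the norm's antecedent
    conclusion: str                       # its pro tanto conclusion
    exceptions: frozenset = frozenset()   # grounds that limit its operation

def conclude(norm, case_facts):
    """Yield the default's conclusion unless an exception is present in the case."""
    if norm.condition in case_facts and not (norm.exceptions & case_facts):
        return norm.conclusion
    return None

# Default: contracts are binding; exception: the contract was made under duress.
binding = Default("contract", "binding", frozenset({"duress"}))

print(conclude(binding, {"contract"}))             # 'binding' in the normal case
print(conclude(binding, {"contract", "duress"}))   # None: the exception blocks the default
# The default itself is never reformulated; only its operation in the
# anomalous case is limited, which keeps the norm stable over time.
```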