Defeasible Reasoning and Probability
Probability calculus—especially its versions based on the idea of
subjective probability—provides an attractive alternative to
defeasible reasoning as a method for dealing with limited and
provisional information. It has a rich history of successful
applications in many domains of science and practice, including legal
practice (though its legal applications are still controversial: see
Fenton, Neil, and Berger 2016) and has recently found many applications
in artificial intelligence.
Consider, for instance, a case where Tom was run over by a car carrying
Mary and John, and in which it is not clear who was driving at the time
of the accident.
On the probabilistic approach, conflicting evidence does not lead us to incompatible beliefs (such as the belief that John was driving the car when it ran over Tom, and the belief that Mary was driving on the same occasion) between which a choice is needed. We rather come to
the consistent view that incompatible hypotheses have different
probabilities. For instance, on the basis of the available evidence, we
may consistently conclude that there is a 40 percent probability that
John was driving, and a 60 percent probability that Mary was driving.
Probabilistic inference uses probability calculus to determine the
probability of an event on the basis of the probability of other events.
For instance, if there is an 80 percent probability that Tom will have problems walking given that he has been run over, then there is a 32 percent probability (40 percent × 80 percent) that Tom will have such problems and that John was the driver, and a 48 percent probability (60 percent × 80 percent) that he will have such problems and that Mary was the driver.
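To make the arithmetic explicit, here is a minimal sketch of the computation in Python (the probabilities come from the example above; the variable names are mine, introduced only for illustration):

```python
# A worked version of the arithmetic in the example (the numbers come
# from the text; the variable names are mine, for illustration only).
p_john = 0.40   # P(John was driving)
p_mary = 0.60   # P(Mary was driving)
p_problems_given_run_over = 0.80  # P(walking problems | run over)

# Joint probabilities: a driver hypothesis combined with Tom's problems.
p_problems_and_john = p_john * p_problems_given_run_over  # 0.32
p_problems_and_mary = p_mary * p_problems_given_run_over  # 0.48

# By the law of total probability, the two joint probabilities sum back
# to 0.80, since John and Mary exhaust the driver hypotheses.
assert abs((p_problems_and_john + p_problems_and_mary) - 0.80) < 1e-9
```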
Here I cannot go into the details of probability calculus or discuss the many difficult
issues related to it, especially when ideas of probability and causation
are combined, or when Bayesian reasoning is used to determine the
probability of a hypothesis in light of the evidence. I will merely
highlight three issues that make probability calculus inadequate as a
general approach for dealing with uncertainty in legal reasoning.
The first issue is that of practicability: we often do not have enough
information to assign numerical probabilities in a sensible way. For
instance, how do I know that there is a 40 percent probability that John
was driving and a 60 percent probability that Mary was driving? In such
circumstances, it seems that we must attribute probabilities arbitrarily
or, no less arbitrarily, we must assume that all alternative ways in
which things may have turned out have the same probability.
The second issue is conceptual: although it makes sense to ascribe
probabilities to factual propositions, it makes little sense to assign
probabilities to legal rules and principles, unless we are making
predictions. A legal decision-maker does not usually decide to use a
normative premise by assessing the probability that the premise holds.
The third issue relates to psychology: humans tend to face situations of
uncertainty by choosing to endorse hypothetically one of the available
epistemic or practical alternatives (while keeping open the chance that
other options may turn out to be preferable), and by applying their
reasoning to this hypothesis (while possibly, at the same time,
exploring what would be the case if things turned out differently). We
do not usually assign probabilities and then compute what further
probabilities follow from such an assignment. When we have definite
beliefs or hypotheses, we are usually good at developing inference
chains, storing them in our minds (keeping them dormant until needed),
and then retracting any such chain when one of its links is defeated.
Conversely, we are bad at assigning numerical probabilities, and even
worse at deriving further probabilities and revising probability
assignments in light of further information.
Our inability to work with numerical probabilities certainly figures
among the many failures of human cognition (like our inability to
quickly execute large arithmetical calculations). In fact, computer systems exist that can efficiently handle complex probability networks (also termed belief networks or Bayesian networks). They
perform very well in certain domains by manipulating numerical
probabilities much faster and more accurately than a normal person (see
Russell and Norvig 2010, chap. 13). However, our bias toward exploring alternative scenarios, and defeasibly endorsing one of them, does have some advantages: it focuses cognition on the implications of the most likely situations; it supports the construction of long reasoning chains; it facilitates the building of scenarios (or stories), which may then be evaluated according to their coherence; and it enables us to link epistemic cognition with binary decision-making (it may be established that we have to adopt decision Q if P is the case, and NON-Q if P is not the case). There is indeed psychological evidence that humans
develop theories even under situations of extreme uncertainty, when no
reasonable probability assignment can be made.
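As an illustration of the kind of computation such systems automate, the following minimal sketch applies Bayes' theorem to the driver hypotheses of the earlier example. The likelihoods are hypothetical, chosen only to show how a new piece of evidence shifts the probabilities:

```python
# A minimal sketch of Bayesian updating of the driver hypotheses, of the
# kind a belief-network system automates. The likelihoods below are
# hypothetical, chosen purely to illustrate the mechanics.
priors = {"john": 0.40, "mary": 0.60}
# P(evidence | driver): suppose some piece of evidence E fits John better.
likelihoods = {"john": 0.70, "mary": 0.20}

# Bayes' theorem: posterior is proportional to prior x likelihood,
# then normalized so the posteriors sum to one.
unnormalized = {d: priors[d] * likelihoods[d] for d in priors}
total = sum(unnormalized.values())
posteriors = {d: p / total for d, p in unnormalized.items()}

for driver, p in posteriors.items():
    print(driver, round(p, 2))  # john 0.7, mary 0.3
```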
The limited applicability of probability calculus in many domains does not exclude that there may be various practical and legal questions for which statistics and probability provide decisive clues, as when scientific evidence is at issue.
Recently, approaches have been developed that try to combine defeasible reasoning and probability by working out the likelihood that different premises, and combinations of them, will be available for constructing arguments, and that the resulting arguments will interact with other arguments. Such approaches would
lead to probabilistic refinements of the IN and OUT labelling previously
considered: rather than just saying that an argument is IN or OUT, we
could establish that it has a certain probability of being IN or OUT
relative to an argumentation basis whose premises or combinations of
them are assigned certain probabilities (Riveret, Rotolo, and Sartor
2012; Hunter 2013).
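To give a flavour of the idea, here is a toy sketch of such a probabilistic labelling. Everything in it is hypothetical: argument A is built from premise p, argument B from premise q, and A attacks B; the premises are assumed to hold independently with the stated probabilities. The probability that B is IN is then the total probability of the argumentation bases in which B is constructible and its attacker is not:

```python
from itertools import product

# Toy probabilistic argument labelling (all names and numbers are
# hypothetical). Argument A is built from premise p, argument B from
# premise q, and A attacks B. Premises hold independently.
premise_prob = {"p": 0.3, "q": 0.9}

prob_b_in = 0.0
for p_holds, q_holds in product([True, False], repeat=2):
    # Probability of this argumentation basis (combination of premises).
    weight = ((premise_prob["p"] if p_holds else 1 - premise_prob["p"])
              * (premise_prob["q"] if q_holds else 1 - premise_prob["q"]))
    # B is constructible only if q holds; since A (when constructible)
    # attacks B and is itself unattacked, B is IN only if p does not hold.
    if q_holds and not p_holds:
        prob_b_in += weight

print(prob_b_in)  # 0.63 = 0.9 x 0.7: probability that B is labelled IN
```

Systems along these lines work over much larger argument graphs, but the underlying idea, summing the probabilities of the argumentation bases in which an argument receives a given label, is the same under the stated independence assumption.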