INTRODUCTION
The general objective of XAI is to develop methods that enable practical
use of an AI tool, including understanding the system’s capabilities and
vulnerabilities. This knowledge makes it possible for users to act
appropriately, for example by cross-checking and complementing the automated
work to accomplish the intended function within a broader established
activity. “Explanation” is one way to assist people in gaining this
expertise.
What are the best ways to explain complex systems? Can we facilitate
learning by promoting self-explaining? What pedagogical approaches
should computer-based tutoring systems use, and should they be derived
from studies of teachers interacting with students? These questions have
driven AI research on Intelligent Tutoring Systems (ITSs) since the
1970s [23, 24, 27]. We illustrate this work with MR Tutor [22], an ITS
for image interpretation that uses statistical analysis to relate
features of images. In MR Tutor,
“explanation” is framed as an instructional activity for
learning how to carry out a diagnostic task using an AI program as an
aid. The following sections outline how models in this program and other
ITSs are created and used, followed by a comparison with XAI objectives
and methods.