User-Guided Global Explanations for Deep Image Recognition: A User Study
  • Mandana Hamidi-Haines (Oregon State University)
  • Zhongang Qi (Tencent)
  • Alan Paul Fern (Oregon State University)
  • Li Fuxin (Oregon State University)
  • Prasad Tadepalli (Oregon State University)

Abstract

We study a user-guided approach for producing global explanations of deep networks for image recognition. The global explanations are produced with respect to a test data set and give the overall frequency of different “recognition reasons” across the data. Each reason corresponds to a small number of the most significant human-recognizable visual concepts used by the network. The key challenge is that the visual concepts cannot be predetermined and will often neither correspond to existing vocabulary nor have labelled data sets. We address this issue via an interactive-naming interface, which allows users to freely cluster significant image regions in the data into visually similar concepts. Our main contribution is a user study on two visual recognition tasks. The results show that the participants were able to produce a small number of visual concepts sufficient for explanation and that there was significant agreement among the concepts, and hence the global explanations, produced by different participants.

Peer review status: UNDER REVIEW

04 Jun 2021: Submitted to Applied AI Letters
18 Jun 2021: Submission Checks Completed
18 Jun 2021: Assigned to Editor
25 Jun 2021: Reviewer(s) Assigned
27 Jul 2021: Review(s) Completed, Editorial Evaluation Pending
29 Jul 2021: Editorial Decision: Revise Minor
25 Aug 2021: 1st Revision Received
26 Aug 2021: Submission Checks Completed
26 Aug 2021: Assigned to Editor
13 Sep 2021: Reviewer(s) Assigned