Abstract
Explanation is key to people's confidence in high-stakes AI systems.
However, machine-learning-based systems -- which account for almost all current
AI -- cannot explain their decisions because they are usually black boxes. The
explainable AI (XAI) movement sidesteps this problem by redefining
"explanation". The human-centered explainable AI (HCXAI) movement identifies
the explanation-oriented needs of users but cannot fulfill them because of its
commitment to machine learning. To achieve the kinds of explanations
needed by real people operating in critical domains, we must rethink how to
approach AI. We describe a hybrid approach to developing cognitive agents that
uses a knowledge-based infrastructure supplemented by data obtained through
machine learning when applicable. These agents will serve as assistants to
humans who will bear ultimate responsibility for the decisions and actions of
the human-robot team. We illustrate the explanatory potential of such agents
using the under-the-hood panels of a demonstration system in which a team of
simulated robots collaborates on a search task assigned by a human.