October 29, 2019
The National Academies Board on Human-Systems Integration (BOHSI) organized a session exploring the state of the art and the research and design frontiers for intelligent systems that support effective human-machine teaming. An important element in the success of human-machine teaming is the ability of the person on the scene to develop appropriate trust in the automated software, including recognizing when it should not be trusted. Researchers in both the Human Factors and Artificial Intelligence (AI) communities are studying the characteristics that software needs to display in order to foster appropriate trust; the DARPA program on Explainable AI (XAI) is one example. The panel brought together prominent researchers from both communities to discuss the current state of the art, challenges and shortfalls, and ways forward in developing systems that engender appropriate trust.
Session Materials
Panel Summary
Presentations
Emilie Roth - Explainable AI, System Transparency, and Human Machine Teaming
William J. Clancey - Critical Thinking about AI and Explanation
Mica R. Endsley - Explainable and Transparent AI
Robert Hoffman - Conceptual Model and Measurement Issues for Explainable AI
Marc Steinberg - Explainable AI, Transparency, and Human Machine Teaming
Board Sponsors
Core funding for BOHSI is provided by the Human Factors and Ergonomics Society, the National Aeronautics and Space Administration, and the U.S. Army Research Laboratory.