Abstract
As intelligent agents become more autonomous, sophisticated, and prevalent, it becomes increasingly important that humans interact with them effectively. Machine learning is now used regularly to acquire expertise, but common techniques produce opaque content whose behavior is difficult to interpret. Before they will be trusted by humans, autonomous agents must be able to explain their decisions and the reasoning that produced their choices. We will refer to this general ability as explainable agency. This capacity for explaining decisions is not an academic exercise. When a self-driving vehicle takes an unfamiliar turn, its passenger may desire to know its reasons. When a synthetic ally in a computer game blocks a player's path, the player may want to understand its purpose. When an autonomous military robot abandons a high-priority goal to pursue another one, its commander may request justification. As robots, vehicles, and synthetic characters become more self-reliant, people will require that they explain their behaviors on demand. The more impressive these agents' abilities, the more essential it is that we be able to understand them.
Original language | English |
---|---|
Title of host publication | Proceedings of the Twenty-Ninth AAAI Conference on Innovative Applications (IAAI-17) |
Publisher | Association for the Advancement of Artificial Intelligence |
Number of pages | 2 |
Publication status | Published - 6 Feb 2017 |
Event | The Twenty-Ninth AAAI Conference on Innovative Applications (IAAI-17) - San Francisco, United States, 6 Feb 2017 → 9 Feb 2017 |
Conference
Conference | The Twenty-Ninth AAAI Conference on Innovative Applications (IAAI-17) |
---|---|
Country/Territory | United States |
City | San Francisco |
Period | 6/02/17 → 9/02/17 |
Keywords
- Autonomous agents
- Cognitive systems
- Explanation