Kayla Boggess

Institution
University of Virginia
Ph.D. Student
Bio

Kayla Boggess is a Ph.D. student in the Department of Computer Science at the University of Virginia, advised by Dr. Lu Feng. Kayla’s research sits at the intersection of Explainable Artificial Intelligence, Human-in-the-Loop Cyber-Physical Systems, and Formal Methods. She is interested in developing human-focused cyber-physical systems that use natural language explanations across single- and multi-agent domains such as search and rescue and autonomous vehicles. Her research has led to multiple publications at top-tier computer science conferences such as IJCAI, ICCPS, and IROS. Kayla is a recipient of the prestigious NSF NRT Graduate Research Fellowship and the University of Virginia Engineering School Dean’s Scholar Fellowship.

Abstract

Recently, there has been increased interest in multi-agent reinforcement learning (MARL), as it enables sequential decision making across a wide range of exciting applications such as robotic search and rescue or autonomous driving. However, most MARL systems operate as a black box, causing user misunderstanding, dissatisfaction, and misuse. Previous studies have shown that explanations can increase system transparency, improve user satisfaction and understanding, and promote human-agent collaboration. Yet most existing work in explainable reinforcement learning (xRL) focuses on the single-agent case. Our work provides explanations for both centralized and decentralized MARL. We provide algorithms for policy summarization of relevant agent behavior (What do [agents] do under the policy?) and for local explanations (When do [agents] do [actions]?, Why don’t [agents] do [actions] in [states]?, What do [agents] do in [conditions]?) that offer a deeper view of specific agent behaviors. Additionally, we provide temporal explanations (Why don’t [agents] complete [task 1], followed by [task 2], and eventually [task 3]?) to help users understand the feasibility of agent actions under a given policy. Finally, we have performed extensive evaluations with real-world users to show the effectiveness of the generated explanations and to identify best practices for presenting them.

Email
kjb5we@virginia.edu
Website