For robots to be integrated effectively into human workflows, it is not enough to address the question of autonomy; we must also consider how their actions and plans are perceived by their human counterparts. When robots generate task plans without such considerations, they can exhibit what we refer to as inexplicable behavior from the point of view of the humans observing them. This problem arises from the human observer's partial or inaccurate understanding of the robot's deliberative process and/or the model (i.e., the robot's capabilities) that informs it. It can have serious implications for the human-robot workspace, ranging from increased cognitive load and reduced trust in the robot to more serious safety concerns in human-robot interactions. In this paper, we propose to address this issue by learning a distance function that accurately models the notion of explicability, and by developing an anytime search algorithm that uses this measure to produce progressively more explicable plans. As a first step, human subjects rate robot plans on how explicable they perceive them to be, and a scoring function called the explicability distance is learned over a set of plan distance measures. We then use this explicability distance as a heuristic to guide the search toward explicable robot plans, minimizing the plan distance between the robot's plan and the plans the human expects. We conduct our experiments in a toy autonomous car domain, and provide empirical evaluations demonstrating that the approach makes the planning process of an autonomous agent conform to human expectations.
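The abstract does not include an implementation, but the two-step procedure it describes can be sketched compactly: regress human explicability ratings onto plan-distance features, then use the learned score as a heuristic in an anytime search. Below is a minimal Python sketch under those assumptions; the function names, the linear regressor, and the domain interface (successors, plan_features) are illustrative stand-ins, not the authors' code.

```python
# Minimal sketch (not the authors' code): learn an explicability distance by
# regressing human explicability ratings onto plan-distance features, then use
# it as a heuristic in an anytime best-first search.
import heapq
import itertools

import numpy as np


def fit_explicability_distance(features, human_scores):
    """Least-squares fit of plan-distance features (n x k) to human ratings (n,).

    Returns a function mapping a feature vector to a predicted explicability
    distance (lower = the plan looks more explicable to the human observer).
    """
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # bias column
    w, *_ = np.linalg.lstsq(X, human_scores, rcond=None)
    return lambda f: float(np.dot(np.append(np.asarray(f, dtype=float), 1.0), w))


def anytime_explicable_search(start, is_goal, successors, plan_features,
                              explicability, weight=1.0, max_expansions=10_000):
    """Yield goal-reaching plans with progressively better explicability.

    `successors(state)` yields (action, next_state, cost) triples and
    `plan_features(plan)` computes the same plan-distance features used when
    fitting the explicability distance; both are assumed domain-specific.
    No duplicate detection is done, for brevity.
    """
    tie = itertools.count()  # tie-breaker so heapq never compares states
    frontier = [(0.0, 0.0, next(tie), start, [])]
    best, expansions = float("inf"), 0
    while frontier and expansions < max_expansions:
        _, g, _, state, plan = heapq.heappop(frontier)
        expansions += 1
        if is_goal(state):
            score = explicability(plan_features(plan))
            if score < best:       # strictly more explicable than any plan so far
                best = score
                yield plan, score  # anytime: caller may stop whenever satisfied
            continue
        for action, nxt, cost in successors(state):
            new_plan = plan + [action]
            # Weighted sum of path cost and predicted explicability distance
            # steers the search toward plans the human would expect.
            h = weight * explicability(plan_features(new_plan))
            heapq.heappush(frontier, (g + cost + h, g + cost, next(tie),
                                      nxt, new_plan))
```

A caller in a domain like the toy autonomous car setting would supply that domain's successor function and plan-distance features, then iterate over the generator, keeping the most recently yielded plan as the most explicable one found so far.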
from cs.AI updates on arXiv.org http://ift.tt/2fBqjvP
via IFTTT