Abstract
There are many forms of human-robot teamwork, ranging from scenarios in which humans act as supervisors, monitoring the robot’s behavior and stepping in when necessary, to more collaborative situations in which humans and robots work together seamlessly. In these cooperative setups, a robot may handle specific manual tasks while the human focuses on others, each complementing the other’s strengths to achieve a common goal. Across this spectrum of teamwork, it is of utmost importance that the robot is able to explain its actions to the humans involved, both to maintain safety and to ensure that the robot does not take incorrect actions. Human-robot teams are increasingly desired in hazardous and often highly regulated domains, where requirements engineering plays a crucial role in the development process. However, requirements for human-robot teams, and for the explainability features they need, remain a gap in the literature. To fill this gap, we present a novel catalog of explainability requirement patterns for human-robot teamwork. Our pattern catalog addresses the identified gap by incorporating human-centered features and providing reusable templates. The catalog is derived from real-world industrial use cases, demonstrating its applicability and effectiveness in meeting explainability needs in critical domains. To aid verification and understanding, we formalize these patterns using NASA’s Formal Requirements Elicitation Tool (FRET), which provides a logical semantics for each pattern.
| Original language | English |
|---|---|
| Title of host publication | IEEE International Conference on Engineering Reliable Autonomous Systems |
| Publisher | IEEE |
| Publication status | Accepted/In press - 20 Mar 2025 |