Bridging the Communication Gap: Artificial Agents Learning Sign Language through Imitation

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review


Abstract

This paper explores acquiring non-verbal communication skills through learning from demonstrations, with potential applications in sign language comprehension and expression. In particular, we focus on imitation learning for artificial agents, exemplified by teaching a simulated humanoid American Sign Language. We use computer vision and deep learning to extract information from videos, and reinforcement learning to enable the agent to replicate observed actions. Compared to other methods, our approach eliminates the need for additional hardware to acquire information. We demonstrate how the combination of these different techniques offers a viable way to learn sign language. Our methodology successfully teaches 5 different signs involving the upper body (i.e., arms and hands). This research paves the way for advanced communication skills in artificial agents.
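The abstract describes a pose-tracking imitation setup: joint configurations are extracted from sign language videos, and a reinforcement learning agent is rewarded for reproducing them. As an illustrative sketch only (the function name, the joint-angle representation, and the exponential reward form are assumptions common in imitation learning, not details taken from the paper), such a tracking reward could look like:

```python
import numpy as np

def imitation_reward(agent_joints: np.ndarray,
                     reference_joints: np.ndarray,
                     scale: float = 2.0) -> float:
    """Reward that peaks at 1.0 when the agent's upper-body joint
    angles exactly match the reference pose extracted from video.

    Both arrays hold joint angles (in radians) for the arms and hands.
    `scale` controls how sharply reward falls off with pose error.
    """
    err = np.mean((agent_joints - reference_joints) ** 2)
    return float(np.exp(-scale * err))  # always in (0, 1]

# A perfect match yields the maximum reward of 1.0;
# larger pose errors shrink the reward toward 0.
pose = np.array([0.1, -0.4, 1.2])
print(imitation_reward(pose, pose))        # 1.0
print(imitation_reward(pose, pose + 0.5))  # smaller, < 1.0
```

In practice the reference trajectory would come from the vision pipeline (one pose per video frame), and the agent would be trained to maximize the accumulated reward over the duration of the sign.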
Original language: English
Title of host publication: 16th International Conference on Social Robotics + AI (ICSR 2024)
Pages: 460–474
Volume: 15561
Publication status: Published - 25 Mar 2025

Publication series

Name: Lecture Notes in Computer Science

Keywords

  • Human-Robot Interaction
  • Sign language
  • Imitation learning

