Simple Science

Cutting edge science explained simply

Computer Science / Robotics

Introducing the THÖR-MAGNI Dataset for Human-Robot Interaction

A new dataset to study human motion and robot collaboration in indoor spaces.

― 5 min read


THÖR-MAGNI: Human-Robot Interaction Dataset. A new dataset enhances understanding of human and robot collaboration.

In recent years, researchers have focused on how humans and robots move and interact indoors. To support this work, we introduce a dataset called THÖR-MAGNI. This dataset contains a large amount of information about how people and robots navigate and interact in indoor spaces. It is intended for those studying social navigation, which includes predicting how humans move, analyzing how people and robots work together, and understanding where people direct their attention during interactions.

Purpose of the Dataset

THÖR-MAGNI was created to give researchers studying human motion and Human-Robot Interaction (HRI) more options. Earlier datasets lacked important details about the factors that influence how people behave in different situations. These missing details made it hard to build robust models that predict how people act based on their surroundings. THÖR-MAGNI aims to fill this gap by offering a wider variety of features and scenarios.

Dataset Features

The THÖR-MAGNI dataset includes various types of data and different situations to help researchers isolate specific factors during their studies. It contains:

  • Data on how people and robots move in different social settings.
  • Annotations that provide context for the recorded actions.
  • Data captured from multiple sources like walking paths, where people are looking, and sensor data from robots.

The dataset is unique because it captures natural interactions between humans and robots in settings designed to reflect real-world situations.

Growth of Human Motion Research

In the past few years, the study of human motion has become increasingly important. This interest has been driven by industries looking for safer ways to have robots work alongside people. For robots to navigate safely in environments shared with humans, they need accurate models of human behavior. The THÖR-MAGNI dataset is crucial for this research as it provides the necessary data for understanding human actions.

Recording Setup

The THÖR-MAGNI dataset was recorded in a controlled environment equipped with advanced technology. This included multiple sensors that captured detailed movements. The data collection occurred over several days, where participants engaged in various tasks that required movement and interaction with robots.

Types of Tasks

Participants were assigned specific roles that required them to move, collaborate with other people, and interact with robots. Some tasks included:

  • Navigating through the environment.
  • Transporting different objects.
  • Engaging with a robot that was moving on its own.

These tasks were designed to resemble real-world scenarios, making the data more applicable for future research.

Multiple Data Modalities

The dataset includes a range of different data types to provide a complete picture of the actions taking place. This includes:

  • Trajectories showing how participants moved from one point to another.
  • Eye-tracking data that reveals where participants were looking during their movements.
  • Environmental data captured by robots, such as 3D point clouds.

By integrating these different types of data, researchers can better analyze human behavior in social settings.
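As a concrete illustration, a trajectory from such a dataset boils down to timestamped positions, from which basic quantities like path length and walking speed can be derived. The sketch below uses a hypothetical record layout (field names `t`, `x`, `y` are assumptions for illustration, not the dataset's actual file schema):

```python
import math
from dataclasses import dataclass

@dataclass
class TrajectorySample:
    """One motion-capture sample for a tracked agent (hypothetical schema)."""
    t: float  # timestamp in seconds
    x: float  # position in metres
    y: float

def path_length(samples: list[TrajectorySample]) -> float:
    """Total distance walked along the trajectory."""
    return sum(
        math.hypot(b.x - a.x, b.y - a.y)
        for a, b in zip(samples, samples[1:])
    )

def mean_speed(samples: list[TrajectorySample]) -> float:
    """Average walking speed over the whole recording."""
    duration = samples[-1].t - samples[0].t
    return path_length(samples) / duration if duration > 0 else 0.0

# A short synthetic trajectory: walking 1 m/s along the x-axis for 3 s.
track = [TrajectorySample(t=float(i), x=float(i), y=0.0) for i in range(4)]
print(path_length(track))  # 3.0
print(mean_speed(track))   # 1.0
```

The same representation extends naturally to the other modalities, e.g. by attaching a gaze direction or a robot pose to each timestamp.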

Analysis and Applications

The THÖR-MAGNI dataset can be used in several research areas. It allows for:

  • Better prediction of human movement based on the context of their actions.
  • Exploration of how different factors, such as the presence of a robot, influence human behavior.
  • Testing and improving the algorithms used for human-robot interaction.
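To make the prediction use case concrete: learned motion predictors trained on data like this are commonly compared against a constant-velocity baseline, which simply extrapolates the last observed velocity. A minimal sketch of that baseline (a generic illustration, not part of the dataset's tooling):

```python
def predict_constant_velocity(positions, dt, horizon):
    """Extrapolate future (x, y) positions assuming the agent keeps its
    most recent velocity. `positions` are past observations sampled every
    `dt` seconds; `horizon` is the number of future steps to predict."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, horizon + 1)]

# Observed: walking along the x-axis at 1 m/s, sampled every 0.5 s.
observed = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]
print(predict_constant_velocity(observed, dt=0.5, horizon=3))
# [(1.5, 0.0), (2.0, 0.0), (2.5, 0.0)]
```

Richer models improve on this baseline precisely by using the contextual cues THÖR-MAGNI records, such as the task a person is performing or the presence of a robot.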

Importance of Contextual Data

Understanding context is vital when studying how people move and interact. The THÖR-MAGNI dataset incorporates various contextual features, such as the presence of obstacles, movement directions, and areas of interest, allowing researchers to analyze how these factors affect human behavior.

Impact of External Factors

Human movement is affected by many external factors, like the layout of the environment, the actions of other individuals, and tasks at hand. The dataset captures this complexity, which is essential for creating models that can predict human actions accurately.

Scope of the Dataset

The THÖR-MAGNI dataset comprises 52 recordings, totaling over 3.5 hours of data. It features 40 participants who performed various navigation and interaction tasks. The diversity in tasks and the number of participants make this dataset a rich resource for studying human behavior.

Tools for Researchers

To assist researchers in working with the dataset, we provide a set of tools for visualizing and processing the data. These tools are designed to help users understand and utilize the dataset effectively.

Visualization Dashboard

A user-friendly dashboard is available that allows users to visualize the movement trajectories and eye-tracking data. This can help in analyzing how participants behaved during the recorded tasks.
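As a toy stand-in for what such a dashboard does, a recorded trajectory can even be rendered as a simple text grid. This is purely illustrative (the actual dashboard is a separate graphical tool shipped with the dataset):

```python
def ascii_plot(points, width=20, height=5):
    """Render (x, y) points on a character grid, origin at bottom-left."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    grid = [[" "] * width for _ in range(height)]
    for x, y in points:
        # Scale each coordinate into the grid; `or 1` guards a flat range.
        col = int((x - x_min) / ((x_max - x_min) or 1) * (width - 1))
        row = int((y - y_min) / ((y_max - y_min) or 1) * (height - 1))
        grid[height - 1 - row][col] = "*"
    return "\n".join("".join(r) for r in grid)

# A diagonal walk across the room, from (0, 0) to (9, 9).
print(ascii_plot([(i, i) for i in range(10)]))
```

A real dashboard adds the pieces this sketch omits: overlaying the floor plan, animating over time, and plotting gaze alongside position.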

Data Processing Package

In addition to visualization, a specialized software package is provided for filtering and preprocessing the data. Researchers can clean the data and prepare it for analysis, making it easier to draw meaningful conclusions.
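Preprocessing of this kind often comes down to simple operations such as smoothing noisy position tracks. A minimal sketch of a centred moving-average filter (the package's actual API and filter choices may differ):

```python
def moving_average(values, window=3):
    """Smooth a 1-D signal with a centred moving average.
    Edge samples use a shrunken window, so the output has the
    same length as the input."""
    half = window // 2
    smoothed = []
    for i in range(len(values)):
        lo = max(0, i - half)
        hi = min(len(values), i + half + 1)
        chunk = values[lo:hi]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# A noisy x-coordinate track from a motion-capture recording (synthetic).
noisy_x = [0.0, 1.2, 1.9, 3.1, 4.0]
print(moving_average(noisy_x))
```

Applied to each coordinate of a trajectory, this removes jitter from the tracking system while preserving the overall path, which is typically done before computing speeds or fitting prediction models.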

Future Goals

Moving forward, we plan to create a benchmark for predicting human movements based on the rich data provided in THÖR-MAGNI. The aim is to enhance the accuracy of models that help understand and predict human actions in indoor environments.

Conclusion

The THÖR-MAGNI dataset is a valuable resource for researchers studying human and robot interaction. By providing comprehensive data on human motion and the factors influencing it, the dataset plays a crucial role in advancing the field of human-robot collaboration.

Acknowledgments

We appreciate the contributions of colleagues and support from various programs that made this research possible. Their help was vital in creating the THÖR-MAGNI dataset as a significant tool for future studies in human motion analysis and interaction with robots.

Original Source

Title: THÖR-MAGNI: A Large-scale Indoor Motion Capture Recording of Human Movement and Robot Interaction

Abstract: We present a new large dataset of indoor human and robot navigation and interaction, called THÖR-MAGNI, that is designed to facilitate research on social navigation: e.g., modelling and predicting human motion, analyzing goal-oriented interactions between humans and robots, and investigating visual attention in a social interaction context. THÖR-MAGNI was created to fill a gap in available datasets for human motion analysis and HRI. This gap is characterized by a lack of comprehensive inclusion of exogenous factors and essential target agent cues, which hinders the development of robust models capable of capturing the relationship between contextual cues and human behavior in different scenarios. Unlike existing datasets, THÖR-MAGNI includes a broader set of contextual features and offers multiple scenario variations to facilitate factor isolation. The dataset includes many social human-human and human-robot interaction scenarios, rich context annotations, and multi-modal data, such as walking trajectories, gaze tracking data, and lidar and camera streams recorded from a mobile robot. We also provide a set of tools for visualization and processing of the recorded data. THÖR-MAGNI is, to the best of our knowledge, unique in the amount and diversity of sensor data collected in a contextualized and socially dynamic environment, capturing natural human-robot interactions.

Authors: Tim Schreiter, Tiago Rodrigues de Almeida, Yufei Zhu, Eduardo Gutierrez Maestro, Lucas Morillo-Mendez, Andrey Rudenko, Luigi Palmieri, Tomasz P. Kucner, Martin Magnusson, Achim J. Lilienthal

Last Update: 2024-03-14 00:00:00

Language: English

Source URL: https://arxiv.org/abs/2403.09285

Source PDF: https://arxiv.org/pdf/2403.09285

Licence: https://creativecommons.org/licenses/by/4.0/

Changes: This summary was created with assistance from AI and may have inaccuracies. For accurate information, please refer to the original source documents linked here.

Thank you to arXiv for use of its open access interoperability.
