Romeo, L., Maselli, M. V., García Domínguez, M., Marani, R., Lavit Nicora, M., Cicirelli, G., et al. (2024). A Dataset on Human-Cobot Collaboration for Action Recognition in Manufacturing Assembly. New York, NY: Institute of Electrical and Electronics Engineers Inc. doi:10.1109/CODIT62066.2024.10708143
A Dataset on Human-Cobot Collaboration for Action Recognition in Manufacturing Assembly
Lavit Nicora, Matteo;
2024
Abstract
This paper introduces a dataset on Human-cobot collaboration for Action Recognition in Manufacturing Assembly (HARMA3). It is a collection of RGB frames, depth maps, RGB-to-depth-aligned (RGB-A) frames, and skeleton data related to actions performed by different subjects in collaboration with a cobot while building an Epicyclic Gear Train (EGT). In particular, 27 subjects executed several trials of the assembly task, which consisted of 7 actions. Data were collected in a laboratory scenario using two Microsoft® Azure Kinect cameras, one frontal and one lateral. The dataset provides a solid foundation for developing and testing advanced action recognition and action segmentation systems, with far-reaching implications beyond human-cobot collaboration; further potential applications include Computer Vision, Machine Learning, and Smart Manufacturing. Preliminary action-segmentation experiments, obtained by applying a state-of-the-art method to features extracted from the RGB and skeleton data, are presented and show high performance.
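
As an illustration of how the skeleton modality of such a dataset might be consumed, the sketch below loads per-frame joint coordinates and stacks them into a frame-wise feature matrix of the kind used by action-segmentation models. The directory layout, file names, and CSV format are assumptions for illustration, not the published HARMA3 structure; only the 32-joint count comes from Azure Kinect body tracking, which exposes 32 joints per skeleton.

    # Minimal sketch: build frame-wise skeleton features for action segmentation.
    # The file layout (one CSV of x,y,z joint coordinates per frame) is
    # hypothetical; adapt paths and parsing to the actual HARMA3 release.
    from pathlib import Path
    import numpy as np

    N_JOINTS = 32  # Azure Kinect body tracking reports 32 joints per skeleton

    def load_skeleton_sequence(csv_path: Path) -> np.ndarray:
        """Load a (T, 32*3) array: x,y,z for each joint, one row per frame."""
        data = np.loadtxt(csv_path, delimiter=",")
        assert data.shape[1] == N_JOINTS * 3, "unexpected joint layout"
        return data

    def normalize(seq: np.ndarray) -> np.ndarray:
        """Center each frame on the pelvis joint (joint 0 in Azure Kinect order)."""
        frames = seq.reshape(len(seq), N_JOINTS, 3)
        frames = frames - frames[:, :1, :]  # translate pelvis to the origin
        return frames.reshape(len(seq), -1)

    if __name__ == "__main__":
        # Hypothetical file name; the real dataset may organize trials differently.
        seq = load_skeleton_sequence(Path("subject01_trial01_frontal.csv"))
        feats = normalize(seq)
        print(feats.shape)  # (T, 96): one 96-dimensional feature row per frame

A frame-wise feature matrix like this is the typical input to temporal segmentation networks, which assign one of the 7 assembly actions to every frame of a trial.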