Cross-domain representation learning for clothes unfolding in robot-assisted dressing

Jinge Qie, Yixing Gao*, Runyang Feng, Xin Wang, Jielong Yang, Esha Dasgupta, Hyung Jin Chang, Yi Chang*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Assistive robots can significantly reduce the burden of daily activities by providing services such as unfolding clothes and assistive dressing. For robotic clothes manipulation tasks, grasping point recognition is one of the core steps, which is usually achieved by supervised deep learning methods using large amounts of labeled training data. Given that collecting real labeled data is extremely labor-intensive and time-consuming in this field, synthetic data generated by physics engines is typically adopted for data enrichment. However, there exists an inherent discrepancy between the real and synthetic domains. Therefore, effectively leveraging synthetic data together with real data to jointly train models for grasping point recognition is desirable. In this paper, we propose a Cross-Domain Representation Learning (CDRL) framework that adaptively extracts domain-specific features from the synthetic and real domains respectively, and then fuses these domain-specific features to produce more informative and robust cross-domain representations, thereby improving the prediction accuracy of the grasping points that an assistive robot relies on. Experimental results show that our CDRL framework recognizes grasping points more precisely than five baseline methods. Based on our CDRL framework, we enable a Baxter humanoid robot to unfold a hanging white coat with a 92% success rate and to successfully assist 6 users in dressing.
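
The dual-branch extraction and fusion described in the abstract could be sketched roughly as below. This is a minimal, hypothetical PyTorch sketch, not the authors' CDRL implementation: the backbone layers, fusion by concatenation, feature sizes, and the regression-style grasping-point head are all assumptions.

```python
# Hypothetical sketch of a two-branch cross-domain model: separate feature
# extractors for synthetic and real images, fused into a shared representation
# that predicts grasping-point coordinates. Not the paper's actual CDRL design.
import torch
import torch.nn as nn

class CrossDomainGraspNet(nn.Module):
    def __init__(self, feat_dim=128, num_points=2):
        super().__init__()
        def branch():
            # Small CNN encoder producing a fixed-size feature vector.
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim),
            )
        self.syn_branch = branch()   # domain-specific features (synthetic)
        self.real_branch = branch()  # domain-specific features (real)
        self.head = nn.Sequential(   # fused representation -> (x, y) per grasping point
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, num_points * 2),
        )

    def forward(self, syn_img, real_img):
        # Fuse domain-specific features by concatenation (an assumed choice).
        fused = torch.cat([self.syn_branch(syn_img), self.real_branch(real_img)], dim=1)
        return self.head(fused)

# Usage: joint training batches would pair synthetic and real images.
model = CrossDomainGraspNet()
pred = model(torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224))
print(pred.shape)  # torch.Size([4, 4]) -> two (x, y) grasping points per sample
```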
Original language: English
Title of host publication: Computer Vision – ECCV 2022 Workshops
Subtitle of host publication: Tel Aviv, Israel, October 23–27, 2022. Proceedings, Part VI
Publisher: Springer, Cham
Chapter: W27
Pages: 658-671
Number of pages: 14
ISBN (Electronic): 978-3-031-25075-0
ISBN (Print): 978-3-031-25074-3
DOIs
Publication status: Published - 19 Feb 2023
Event: Tenth International Workshop on Assistive Computer Vision and Robotics - Tel Aviv, Israel
Duration: 24 Oct 2022 – 24 Oct 2022

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer, Cham
Volume: 13806
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Workshop

Workshop: Tenth International Workshop on Assistive Computer Vision and Robotics
Abbreviated title: ACVR 2022
Country/Territory: Israel
City: Tel Aviv
Period: 24/10/22 – 24/10/22
