Grounding Bodily Awareness in Visual Representations for Efficient Policy Learning

Junlin Wang, Zhiyun Lin
Southern University of Science and Technology
[Overview figure]

Abstract

Learning effective visual representations for robotic manipulation remains a fundamental challenge due to the complex body dynamics involved in action execution. In this paper, we study how visual representations that carry body-relevant cues can enable efficient policy learning for downstream robotic manipulation tasks. We present Inter-token Contrast (ICon), a contrastive learning method applied to the token-level representations of Vision Transformers (ViTs). ICon enforces a separation in the feature space between agent-specific and environment-specific tokens, resulting in agent-centric visual representations that embed body-specific inductive biases. This framework can be seamlessly integrated into end-to-end policy learning by incorporating the contrastive loss as an auxiliary objective. Our experiments show that ICon not only improves policy performance across various manipulation tasks but also facilitates policy transfer across different robots. Project website: https://anonymous.4open.science/w/ICon/
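The abstract describes ICon only at a high level. As a rough illustration, the minimal PyTorch sketch below shows one way a token-level contrastive objective that separates agent tokens from environment tokens could be written. The exact loss formulation, the temperature value, the source of the per-token agent/environment mask, and the names `inter_token_contrast` and `lambda_icon` are assumptions made for illustration, not the paper's implementation.

```python
# Hypothetical sketch of an inter-token contrastive loss on ViT patch tokens.
# Assumptions (not from the paper): supervised-contrastive loss form, temperature,
# and availability of a per-token agent/environment mask (e.g., from robot masks).
import torch
import torch.nn.functional as F


def inter_token_contrast(tokens: torch.Tensor,
                         agent_mask: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    """Pull same-group tokens (agent/agent, env/env) together and push
    agent tokens away from environment tokens within a single image.

    Args:
        tokens:      (N, D) ViT patch-token features for one image.
        agent_mask:  (N,) bool, True where the patch belongs to the robot body.
        temperature: softmax temperature of the InfoNCE-style objective.
    """
    z = F.normalize(tokens, dim=-1)                      # unit-norm token features
    sim = z @ z.t() / temperature                        # (N, N) scaled cosine similarities
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    same = agent_mask[:, None] == agent_mask[None, :]    # (N, N) same-group indicator
    pos = same & ~eye                                    # positives exclude self-pairs

    logits = sim.masked_fill(eye, float('-inf'))         # drop self-similarity from softmax
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Mean log-likelihood of each anchor token's positives (0 if it has none).
    loss = -(log_prob.masked_fill(~pos, 0.0)).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return loss.mean()


# As an auxiliary objective alongside policy learning, e.g. behavior cloning:
#   total_loss = bc_loss + lambda_icon * inter_token_contrast(tokens, agent_mask)
```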

Experimental Results (Franka)

[Video panels: Close Drawer, Open Box, Take Lid off Saucepan, Put Rubbish in Bin, Lift Cube, Close Microwave, Lift, Open Door, Door, Stack Cube, Stack]

Cross-Robot Transfer

[Video panels: Lift Cube, Lift (IIWA), Open Door, Stack (IIWA), Lift (Kinova), Stack (Kinova)]
