Affordance-based Robot Manipulation
with Flow Matching

We present a framework for assistive robot manipulation that focuses on two fundamental challenges: first, efficiently adapting large-scale models to downstream scene affordance understanding tasks, especially in daily living scenarios where gathering multi-task data involving humans requires strenuous effort; second, effectively learning robot action trajectories by grounding the visual affordance model. We tackle the first challenge with a parameter-efficient prompt tuning method that prepends learnable text prompts to a frozen vision model to predict manipulation affordances in multi-task scenarios. We then propose to learn robot action trajectories guided by affordances through supervised flow matching. Flow matching represents a robot visuomotor policy as a conditional process that flows random waypoints to desired robot action trajectories. Finally, we introduce a real-world dataset with 10 tasks across Activities of Daily Living to evaluate our framework. Our extensive evaluation shows that the proposed prompt tuning method for learning manipulation affordances achieves competitive performance, even outperforming other finetuning protocols across data scales, while remaining parameter-efficient. Learning multi-task robot action trajectories with flow matching yields consistently better results than alternative behavior cloning methods on several robot manipulation benchmarks, including more stable training and evaluation and noticeably faster inference, while maintaining generalization performance comparable to diffusion policy, with flow matching performing marginally better in most cases. Our framework seamlessly unifies affordance learning and action generation with flow matching for robot manipulation.
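The prompt tuning idea described above (learnable prompts prepended to the input of a frozen vision backbone, with only the prompts and a small task head trained) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual code; the class name, dimensions, and task head are assumptions:

```python
import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    """Sketch of parameter-efficient prompt tuning: learnable prompt
    tokens are prepended to the token sequence fed to a frozen
    transformer encoder; only the prompts and the head are trained."""

    def __init__(self, frozen_encoder, embed_dim=64, n_prompts=8, n_tasks=10):
        super().__init__()
        self.encoder = frozen_encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # backbone stays frozen
        # Learnable prompt tokens (the only new encoder-side parameters).
        self.prompts = nn.Parameter(torch.randn(n_prompts, embed_dim) * 0.02)
        # Illustrative multi-task head, one output per task.
        self.head = nn.Linear(embed_dim, n_tasks)

    def forward(self, tokens):
        # tokens: (B, N, D) patch embeddings from the vision backbone
        b = tokens.shape[0]
        prompts = self.prompts.unsqueeze(0).expand(b, -1, -1)
        x = torch.cat([prompts, tokens], dim=1)  # prepend prompts
        x = self.encoder(x)
        return self.head(x.mean(dim=1))  # pooled multi-task prediction
```

Because gradients flow only into `self.prompts` and `self.head`, the number of trained parameters is a tiny fraction of the backbone, which is what makes the method parameter-efficient.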


Highlights

Flow matching represents a robot visuomotor policy as a conditional process of flowing random waypoints to desired robot actions.
Prompt tuning for vision-language models to predict manipulation affordances in multi-task scenarios.
Flow matching exhibits more stable training and evaluation and noticeably faster inference, while maintaining generalization performance comparable to diffusion policy, performing marginally better in most cases.
An example of the robot feeding a human with flow matching.
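The conditional flow matching objective behind the first highlight can be sketched as follows. This is a minimal PyTorch sketch under the common linear-path formulation of flow matching; the network architecture and all names are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Hypothetical velocity-field network over flattened waypoint
    trajectories, conditioned on an observation embedding."""

    def __init__(self, action_dim=2, horizon=16, cond_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(action_dim * horizon + cond_dim + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, action_dim * horizon),
        )

    def forward(self, x, t, cond):
        # x: (B, horizon*action_dim) waypoints, t: (B, 1) time, cond: (B, cond_dim)
        return self.net(torch.cat([x, t, cond], dim=-1))

def flow_matching_loss(model, actions, cond):
    """Regress the velocity that transports random waypoints
    x0 ~ N(0, I) to expert trajectories x1 along the straight-line
    probability path x_t = (1 - t) * x0 + t * x1."""
    x1 = actions
    x0 = torch.randn_like(x1)            # random initial waypoints
    t = torch.rand(x1.shape[0], 1)       # random time in [0, 1)
    xt = (1 - t) * x0 + t * x1           # point on the linear path
    target_v = x1 - x0                   # constant velocity of that path
    pred_v = model(xt, t, cond)
    return ((pred_v - target_v) ** 2).mean()
```

At inference time, a trajectory is generated by sampling random waypoints and integrating the learned velocity field from t = 0 to t = 1 (e.g. with a few Euler steps), which is where the fast inference relative to diffusion policy comes from.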

Real-world Experiments

Flow matching has been tested on 10 tasks across Activities of Daily Living and consistently outperforms alternative behavior cloning methods. (Videos are played at 4x speed.)

Comb the hair
Sweep the trash
Hang the towel
Pass the water
Put on the hat
Wipe the forearm
Brush the teeth

Simulation Experiments

Franka Kitchen
Robomimic
Kuka Block Stacking
PushT

Paper

arXiv:2409.01083 [cs.RO].
Affordance-based Robot Manipulation with Flow Matching
Fan Zhang, Michael Gienger

Code is available at https://github.com/HRI-EU/flow_matching.
We are in the process of integrating flow matching into the Hugging Face 🤗 LeRobot PushT task.


Team


This webpage template was recycled from here.