ArrayBot: Reinforcement Learning for Generalizable Distributed Manipulation through Touch

Zhengrong Xue* 1,2,3,4,  Han Zhang* 1,2,  Jingwen Cheng1,  Zhengmao He2, 
Yuanchen Ju1,5,  Changyi Lin2,  Gu Zhang2,4,  Huazhe Xu1,2,3
*Indicates Equal Contribution
1Tsinghua University,  2Shanghai Qi Zhi Institute,  3Shanghai AI Lab, 
4Shanghai Jiao Tong University,  5Southwest University

Abstract

We present ArrayBot, a distributed manipulation system consisting of a 16x16 array of vertically sliding pillars integrated with tactile sensors, which can simultaneously support, perceive, and manipulate tabletop objects. Towards generalizable distributed manipulation, we leverage reinforcement learning (RL) algorithms to automatically discover control policies. To cope with the massively redundant action space, we propose to reshape it by considering a spatially local action patch and low-frequency actions in the frequency domain. With this reshaped action space, we train RL agents that can relocate diverse objects through tactile observations alone. Surprisingly, we find that the discovered policy not only generalizes to unseen object shapes in the simulator but also transfers to the physical robot without any domain randomization. Leveraging the deployed policy, we present a wide range of real-world manipulation tasks, illustrating the vast potential of RL on ArrayBot for distributed manipulation.
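The two action-space reshaping ideas can be pictured concretely. Below is a minimal sketch, assuming the frequency-domain parameterization is a 2D inverse DCT and that the policy outputs only a small K x K block of low-frequency coefficients; the abstract does not name the exact transform, and the `local_patch` helper and all parameter values here are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.fft import idctn

GRID = 16  # the pillar array is 16x16
K = 4      # assumed: number of low-frequency coefficients kept per axis

def low_freq_action_to_heights(coeffs_low: np.ndarray) -> np.ndarray:
    """Map a K x K block of low-frequency coefficients (the reshaped
    action) to per-pillar height deltas via an inverse 2D DCT.

    Frequencies outside the K x K block are zeroed, so the 16x16 command
    is a smooth surface rather than 256 independent pillar values.
    """
    full = np.zeros((GRID, GRID))
    full[:K, :K] = coeffs_low
    return idctn(full, norm="ortho")

def local_patch(full_action: np.ndarray, center: tuple, p: int = 6) -> np.ndarray:
    """Hypothetical 'spatially local action patch': actuate only the
    p x p pillars around the object's tactile footprint."""
    r0 = max(center[0] - p // 2, 0)
    c0 = max(center[1] - p // 2, 0)
    patch = np.zeros_like(full_action)
    patch[r0:r0 + p, c0:c0 + p] = full_action[r0:r0 + p, c0:c0 + p]
    return patch

# A random low-frequency action already yields a smooth height field:
rng = np.random.default_rng(0)
delta = low_freq_action_to_heights(rng.normal(size=(K, K)))
print(delta.shape)  # (16, 16)
```

Under this reading, the appeal of the low-frequency parameterization is that it both shrinks the action dimension (from 256 to K*K here) and biases the surface toward smooth deformations that transport an object without breaking contact.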

Video Presentation