Robotic grasping remains a fundamental challenge due to its uncertain, contact-rich nature. Traditional rigid robotic hands, with limited degrees of freedom and little compliance, rely on complex model-based controllers and heavy feedback loops to manage such interactions. Soft robots, by contrast, exhibit embodied mechanical intelligence: their underactuated structures and whole-body passive flexibility naturally accommodate uncertain contacts and enable adaptive behaviors. To harness this capability, we propose a lightweight actuation-space learning framework that infers distributional control representations for whole-body soft robotic grasping directly from deterministic demonstrations using a flow matching model (Rectified Flow), without requiring dense sensing or heavy control loops. Trained on only 30 demonstrations covering less than 8% of the reachable workspace, the learned policy achieved a 97.5% grasp success rate over 1000 trials in simulation. In real-world experiments on 50 uniformly distributed targets, the policy achieved a 100% success rate, generalized to object size variations from -33% to +100%, and remained stable under execution-time scaling from 20% to 200%. These results demonstrate that actuation-space learning effectively embeds mechanical intelligence into control, significantly reducing the reliance on centralized computation for grasping under uncertainty.
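The core mechanics of Rectified Flow as used here, i.e. regressing a velocity field onto straight-line paths from noise to demonstrated actuation commands, then sampling by integrating the learned ODE, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the data shapes, the placeholder demonstrations, and the function names (`rf_training_pairs`, `sample`) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical demo set: 30 target positions (2-D) paired with demonstrated
# actuation commands (3 values each). Real demonstrations would replace this.
targets = rng.uniform(0.0, 1.0, size=(30, 2))
actions = np.concatenate([targets, targets.sum(axis=1, keepdims=True)], axis=1)

def rf_training_pairs(a1):
    """Build Rectified Flow regression targets.

    Draw noise a0 ~ N(0, I) and a time t ~ U(0, 1), form the straight-line
    interpolant a_t = (1 - t) a0 + t a1, and return the constant velocity
    v = a1 - a0 that the conditional velocity model should predict at (a_t, t).
    """
    a0 = rng.standard_normal(a1.shape)
    t = rng.uniform(size=(a1.shape[0], 1))
    at = (1.0 - t) * a0 + t * a1
    v = a1 - a0
    return at, t, v

def sample(v_fn, cond, action_dim=3, steps=50):
    """Euler integration of da/dt = v_fn(a, t, cond) from noise at t=0 to t=1."""
    a = rng.standard_normal((cond.shape[0], action_dim))
    dt = 1.0 / steps
    for k in range(steps):
        t = np.full((cond.shape[0], 1), k * dt)
        a = a + dt * v_fn(a, t, cond)
    return a
```

A velocity network trained to minimize the mean squared error between its prediction at `(at, t, cond)` and `v` can then be passed to `sample` to generate actuation commands for a new target, which is what lets a handful of deterministic demonstrations induce a distributional policy.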
BibTeX key: YBW+26 (article)