While soft robots made of compliant materials have advantages over rigid robots, giving them autonomous control is challenging: their nearly limitless ways of bending and stretching make it difficult to sense and precisely control their position. But now, MIT researchers have enabled a soft robotic arm to understand its configuration in 3D space, leveraging only motion and position data from its own “sensorized” skin.
In a paper being published in the journal IEEE Robotics and Automation Letters, the researchers describe a system of soft sensors that cover a robot’s body and provide awareness of its motion and position. That feedback feeds into a deep-learning model that sifts out clear signals to determine the robot’s 3D configuration. The researchers validated their system on a soft robotic arm resembling an elephant trunk that can predict its own position as it autonomously swings around and extends.
The sensors can be fabricated using off-the-shelf materials, said Ryan Truby, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) who is co-first author on the paper along with CSAIL postdoc Cosimo Della Santina.
“We’re sensorizing soft robots to get feedback for control from sensors, not vision systems, using a very easy, rapid method for fabrication,” he says. “We want to use these soft robotic trunks, for instance, to orient and control themselves automatically, to pick things up and interact with the world. This is a first step toward that type of more sophisticated automated control.”
One goal of soft robotics has been fully integrated body sensors. While sensors fabricated from soft materials are more desirable than traditional rigid sensors in terms of design flexibility and natural compliance, they typically require specialized materials and fabrication methods, making them difficult to build.
During his research, Truby made an interesting finding. “I found these sheets of conductive materials used for electromagnetic interference shielding, that you can buy anywhere in rolls,” he says. These materials have “piezoresistive” properties, meaning they change electrical resistance when strained. Truby realized they could make effective soft sensors if they were placed on certain spots on the trunk.
As the piezoresistive sensor deforms in response to the trunk’s stretching and compressing, its change in electrical resistance is converted to a specific output voltage. That voltage is then used as a signal correlating to the movement.
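As a rough illustration, the resistance-to-voltage conversion can be modeled as a simple voltage divider. The circuit topology, component values, and the assumption that resistance rises with strain are illustrative here, not details from the paper:

```python
def divider_voltage(r_sensor, r_fixed=10_000.0, v_in=5.0):
    """Output of a voltage divider with the piezoresistive sensor on
    the low side: V_out = V_in * R_sensor / (R_sensor + R_fixed)."""
    return v_in * r_sensor / (r_sensor + r_fixed)

# If strain raises the sensor's resistance, the output voltage rises
# with it, giving an electrical signal that tracks the deformation.
v_rest = divider_voltage(10_000.0)      # unstrained: 2.5 V
v_strained = divider_voltage(15_000.0)  # strained:   3.0 V
```

Reading that voltage over time yields the stream of deformation signals the learning system consumes.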
Inspired by kirigami, a variation of origami that includes making cuts in a material, Truby designed and laser-cut rectangular strips of conductive silicone sheets into various patterns, such as rows of tiny holes or crisscrossing slices like a chain link fence. According to Truby, that made them far more flexible, stretchable, “and beautiful to look at.”
The researchers’ robotic trunk comprises three segments, each with four fluidic actuators (12 total) used to move the arm. They fused one sensor over each segment, with each sensor covering and gathering data from one embedded actuator in the soft robot. They used “plasma bonding,” a technique that energizes the surface of a material to make it bond to another material. It takes roughly a couple of hours to shape dozens of sensors that can be bonded to soft robots using a handheld plasma-bonding device.
To estimate the soft robot’s configuration using only the sensors, the researchers built a deep neural network that sifts through the noise to capture meaningful feedback signals. They also developed a new kinematic model that describes the soft robot’s shape with far fewer variables, vastly reducing what the network has to process.
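A minimal sketch of such a sensor-to-configuration network is below, assuming 12 sensor channels in and a six-variable reduced configuration out. The dimensions and architecture are illustrative assumptions, and untrained random weights stand in for the learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 12 sensor channels in, six kinematic
# variables out (e.g. a bend/extension description per segment).
N_SENSORS, N_HIDDEN, N_CONFIG = 12, 64, 6

# Random weights stand in for the trained network's parameters.
W1 = rng.normal(0.0, 0.1, (N_HIDDEN, N_SENSORS))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0.0, 0.1, (N_CONFIG, N_HIDDEN))
b2 = np.zeros(N_CONFIG)

def estimate_configuration(voltages):
    """Map 12 noisy sensor voltages to a low-dimensional configuration
    estimate with a small feed-forward network."""
    hidden = np.tanh(W1 @ voltages + b1)  # nonlinear feature layer
    return W2 @ hidden + b2               # linear read-out

config = estimate_configuration(rng.normal(2.5, 0.3, N_SENSORS))
print(config.shape)  # (6,)
```

The payoff of the reduced kinematic model is visible in the output size: the network only has to predict a handful of variables rather than a dense description of the arm’s full shape.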
In experiments, the researchers had the trunk swing around and extend in random configurations over approximately an hour and a half, using a traditional motion-capture system for ground-truth data. In training, the model analyzed data from its sensors to predict a configuration and compared its predictions to the ground-truth data being collected simultaneously.
In this way, the model “learns” to map signal patterns from its sensors to real-world configurations. Results indicated that, for certain steadier configurations, the robot’s estimated shape matched the ground truth.
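That training procedure amounts to minimizing the error between sensor-based predictions and the motion-capture ground truth. A toy version of the loop, with synthetic stand-in data and a linear map fit by gradient descent on mean-squared error (all values and dimensions here are fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 1,000 time steps of 12 standardized sensor
# readings and the motion-capture configurations they correspond to.
X = rng.normal(0.0, 1.0, (1000, 12))               # sensor signals
W_true = rng.normal(0.0, 1.0, (12, 6))
Y = X @ W_true + rng.normal(0.0, 0.01, (1000, 6))  # mocap ground truth

# Fit a linear map by gradient descent on mean-squared error: the same
# "predict, compare to ground truth, adjust" loop used in training.
W = np.zeros((12, 6))
lr = 0.1
for _ in range(500):
    residual = X @ W - Y                  # prediction error vs. ground truth
    W -= lr * (X.T @ residual) / len(X)   # step along the MSE gradient

print(np.mean((X @ W - Y) ** 2))  # residual error, near the noise floor
```

The real system replaces the linear map with the deep network above and the synthetic arrays with logged sensor voltages and motion-capture poses, but the predict-compare-adjust structure is the same.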
In the near future, the researchers aim to explore new sensor designs for improved sensitivity and to develop new models and deep-learning methods that reduce the training required for each new soft robot. They also hope to refine the system to better capture the robot’s full dynamic motions.