Zero-shot learning helps Intrinsic pave the future for robotics

Roboticists are developing new ways to teach robots to grasp items and to work in coordination with other robots at much faster speeds. These skills are needed in modern factories that use automation to assemble cars, computers, and other products.

At Automate 2024 in Chicago this week, AI robotics company Intrinsic is showing its work with Nvidia and Google DeepMind Robotics. Working with its customer Trumpf Machine Tools, Intrinsic used Nvidia Isaac Manipulator foundation models for grasping skills; the grasping skill was trained with 100% synthetic data generated by Isaac Sim. Isaac Manipulator was unveiled in March at Nvidia GTC 2024.
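
It is worth pausing on what "100% synthetic data" means in practice. Below is a minimal, library-agnostic sketch of how randomized grasp-training samples might be generated; Isaac Sim has its own Python tooling for this, and every function name, field, and numeric range here is an illustrative assumption rather than the actual pipeline.

```python
# Minimal sketch of synthetic grasp-data generation with domain randomization,
# the general technique behind training on simulation data alone. All names,
# ranges, and the pose format below are illustrative assumptions; a real
# pipeline would also render scenes and physically test each grasp.
import math
import random

def random_object_pose():
    """Randomize an object's position and yaw on a virtual table."""
    return {
        "x": random.uniform(-0.3, 0.3),   # meters; assumed workspace bounds
        "y": random.uniform(-0.3, 0.3),
        "yaw": random.uniform(0.0, 2.0 * math.pi),
    }

def synthesize_sample():
    """One labeled training sample: a randomized scene plus a grasp label."""
    pose = random_object_pose()
    # A simulator would render the scene and verify the grasp in physics;
    # here the label is a placeholder heuristic to keep the sketch runnable.
    grasp = {"x": pose["x"], "y": pose["y"], "angle": pose["yaw"]}
    return {"scene": pose, "label": grasp}

dataset = [synthesize_sample() for _ in range(10_000)]
print(len(dataset), dataset[0]["label"])
```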

“Instead of hard-coding specific grippers to grasp specific objects in a certain way, efficient code for a particular gripper and object is auto-generated to complete the task using the foundation model and synthetic training data,” explained Wendy Tan White, CEO of Intrinsic, in a blog post.

Using AI foundation models means companies can program a number of robot configurations that can then generalize and interact with diverse objects in the real world. “In the future, developers will be able to use ready-made universal grasping skills like these to greatly accelerate their robot programming,” White added. Such a capability can have a profound impact, including reducing development costs.
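
To give a sense of what a ready-made universal grasping skill could look like from a developer's side, here is a hypothetical interface sketch. The function, types, and values are assumptions made for illustration and are not Intrinsic's actual API.

```python
# Hypothetical interface for a "universal grasping skill": rather than
# hand-coding gripper-specific logic, the developer asks a foundation model
# for a grasp given any gripper/object pair. Everything here is an assumed
# stand-in, not a real API.
from dataclasses import dataclass

@dataclass
class GraspPose:
    position: tuple[float, float, float]            # meters, robot frame (assumed)
    orientation: tuple[float, float, float, float]  # quaternion (assumed)
    width: float                                    # gripper opening, meters (assumed)

def propose_grasp(point_cloud: list[tuple[float, float, float]],
                  gripper_id: str) -> GraspPose:
    """Stub for a model call that generalizes across grippers and objects."""
    # A real skill would run perception and the trained model here; a fixed
    # placeholder pose keeps this sketch self-contained and runnable.
    return GraspPose(position=(0.4, 0.0, 0.1),
                     orientation=(0.0, 0.0, 0.0, 1.0),
                     width=0.05)

grasp = propose_grasp([(0.4, 0.0, 0.1)], gripper_id="parallel_jaw_v1")
print(grasp)
```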

A foundation model is based on transformer deep learning, which allows a neural network to learn by tracking relationships in data. Such models are trained on huge datasets and can be used to process and understand robot and sensor information, similar to the way ChatGPT works for text, Nvidia explained.
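
For readers who want to see the mechanism, the sketch below builds a small transformer encoder in PyTorch and runs a batch of made-up "sensor tokens" through it, showing how attention lets each token encode its relationships to the others. The layer sizes and shapes are arbitrary assumptions, not the architecture of any model named above.

```python
# A small transformer encoder over "sensor tokens" (PyTorch). Dimensions are
# arbitrary assumptions chosen for illustration.
import torch
import torch.nn as nn

d_model, seq_len, raw_dim = 256, 32, 16  # embedding width, tokens, sensor dims (assumed)

# Project raw sensor readings (e.g., joint angles, depth features) to embeddings.
sensor_proj = nn.Linear(raw_dim, d_model)

encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)

raw_sensors = torch.randn(1, seq_len, raw_dim)  # one batch of synthetic readings
tokens = sensor_proj(raw_sensors)
features = encoder(tokens)   # self-attention relates every token to every other
print(features.shape)        # torch.Size([1, 32, 256])
```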

Such models enable robot perception and decision-making, providing zero-shot learning: the ability to perform tasks without prior examples.

With Google DeepMind, Intrinsic has developed a universal, automatic, AI-based robot motion planner that lets one or more robots work together in a shared workspace. The model is trained on synthetic data from a physics engine, with inputs built from models of the geometry, the robots' kinematics and dynamics, and the task description. Training runs in the cloud, and the output is a model that represents “near-optimal robot motion paths and trajectories, usually outperforming solutions from human experts,” White said.
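
The data flow White describes can be sketched end to end. The stand-in below fakes the physics engine with random numbers; the structures, names, and sizes are assumptions for illustration, not the Intrinsic/DeepMind system.

```python
# Sketch of the training data flow: a physics engine turns geometry,
# kinematics, dynamics, and a task description into planning problems paired
# with feasible trajectories, and a model is trained in the cloud to map the
# former to the latter. All structures here are illustrative assumptions.
import random
from dataclasses import dataclass

@dataclass
class PlanningProblem:
    geometry: list[float]     # scene and obstacle encoding (assumed)
    kinematics: list[float]   # joint limits, link lengths (assumed)
    dynamics: list[float]     # masses, torque limits (assumed)
    task: str                 # e.g., "weld seam with four robots"

def simulate_problem():
    """Stand-in for the physics engine: emit a problem and a feasible trajectory."""
    problem = PlanningProblem(
        geometry=[random.random() for _ in range(8)],
        kinematics=[random.random() for _ in range(6)],
        dynamics=[random.random() for _ in range(6)],
        task="reach randomized target",
    )
    # A real engine would compute a collision-free, near-time-optimal path;
    # this placeholder is 20 waypoints of 6 joint values each.
    trajectory = [[random.random() for _ in range(6)] for _ in range(20)]
    return problem, trajectory

# Cloud training would iterate over many such pairs; the trained model then
# predicts near-optimal trajectories for unseen problems directly.
dataset = [simulate_problem() for _ in range(1_000)]
problem, trajectory = dataset[0]
print(problem.task, len(trajectory))
```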

The company released a video showing four robots assembling a box in concert with one another. The orchestration of the four robots, working on a scaled-down simulation of a car-welding application, was 100% ML-generated. The motion plans for each robot are auto-generated and perform about 25% better than some traditional methods, White said.

“Robotics is AI in the physical world and we’re excited for what’s next,” White added.