At the University of Texas at Dallas’ Intelligent Robotics and Vision Laboratory, a robot nudges a toy package of butter across a table. With each push, the robot learns to recognize the object through a new system developed by a team of UT Dallas computer scientists.
In the new system, the robot pushes an object multiple times to collect a sequence of images, which allows the system to segment every object in the sequence until the robot recognizes them all. Previous approaches relied on the robot pushing or grasping an object once to “learn” it.
The team presented their research paper at the Robotics: Science and Systems conference, held in Daegu, South Korea, from July 10 to 14. Papers for the conference were selected based on novelty, technical quality, significance, potential impact, and clarity.
We’re still a long way from having a robot that can cook dinner, clear the kitchen table or empty the dishwasher, but the research group has made significant progress toward a robotic system that uses artificial intelligence to help robots identify and remember objects more accurately, said Dr. Yu Xiang, senior author of the paper.
“If you ask a robot to pick up a mug or bring you a bottle of water, it needs to recognize those objects,” said Xiang, an assistant professor of computer science in the Erik Jonsson School of Engineering and Computer Science.
The UTD researchers’ technology is designed to help robots detect a wide variety of objects found in environments such as homes, and to generalize, or identify similar versions of common items such as water bottles, even when they vary in brand, shape or size.
Xiang’s lab has a storage bin full of toy food packages — spaghetti, ketchup, carrots — that the team uses to train the lab’s robot, LAMP. LAMP, a mobile manipulator robot from Fetch Robotics, sits on a circular mobile platform and stands about four feet tall. It has a long mechanical arm with seven joints that ends in a square “hand” with two fingers for grasping objects.
Xiang said the robot learns to recognize objects in much the same way a child learns to interact with a toy.
“After you push an object, the robot will recognize it,” Xiang said, “and then we can use that data to train an AI model, so the next time the robot sees the object, you won’t have to push it again. The second time it sees the object, it will simply pick it up.”
What’s new about the researchers’ method is that the robot pushes each item 15 to 20 times, compared with the single push used in traditional interactive recognition methods. Xiang said the multiple pushes let the robot take more photos with its RGB-D camera, which includes a depth sensor, and learn more about each item, reducing the chance of mistakes.
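The article does not include the team’s code, but the push-and-capture idea can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the authors’ implementation: the `robot` and `camera` interfaces, the depth-difference heuristic for finding moved pixels, and all helper names are invented for the example.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Frame:
    """One RGB-D capture: a color image plus a per-pixel depth map."""
    rgb: np.ndarray    # H x W x 3, uint8
    depth: np.ndarray  # H x W, meters


@dataclass
class TrainingExample:
    """An image paired with the pseudo-label derived from the robot's pushes."""
    frame: Frame
    mask: np.ndarray   # H x W, 1 where an object moved, 0 for background


def mask_from_motion(before: Frame, after: Frame, depth_delta: float = 0.01) -> np.ndarray:
    """Label pixels whose depth changed noticeably between two captures.

    A real system would track, merge and refine these regions across the whole
    push sequence; here a single push simply yields one foreground region.
    """
    moved = np.abs(after.depth - before.depth) > depth_delta
    return moved.astype(np.int64)


def collect_self_labeled_data(robot, camera, num_pushes: int = 15) -> list[TrainingExample]:
    """Push the scene repeatedly and turn the image sequence into labeled examples."""
    examples: list[TrainingExample] = []
    previous = camera.capture()                # assumed to return a Frame
    for _ in range(num_pushes):                # 15 to 20 pushes, per the article
        robot.push_object()                    # a small nudge, as in the butter demo
        current = camera.capture()
        examples.append(TrainingExample(frame=current,
                                        mask=mask_from_motion(previous, current)))
        previous = current
    return examples
```

The point the article makes is captured in the loop: each push adds another self-labeled view, so the training set grows without any human annotation.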
The task of recognizing, distinguishing and remembering objects, known as segmentation, is one of the key capabilities a robot needs in order to complete tasks.
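To make the “train once, then recognize without pushing” idea from Xiang’s quote concrete, here is an equally hedged sketch of how the collected examples might be used. The tiny network, the training loop and the function names are stand-ins invented for illustration; the actual system trains a far more capable unseen-object instance segmentation model.

```python
import torch
from torch import nn


class TinySegmenter(nn.Module):
    """Toy stand-in for the real segmentation network used by the robot."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.layers(rgb)  # B x num_classes x H x W logits


def to_tensor(rgb_image) -> torch.Tensor:
    """Convert an H x W x 3 uint8 numpy image into a normalized 1 x 3 x H x W tensor."""
    return torch.from_numpy(rgb_image).float().permute(2, 0, 1).unsqueeze(0) / 255.0


def fine_tune(model: nn.Module, examples, epochs: int = 5) -> None:
    """Train on the pseudo-labeled images gathered during the pushing phase."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for ex in examples:  # TrainingExample objects from the sketch above
            logits = model(to_tensor(ex.frame.rgb))
            target = torch.from_numpy(ex.mask).long().unsqueeze(0)  # 1 x H x W
            loss = loss_fn(logits, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()


def segment_without_pushing(model: nn.Module, rgb_image) -> torch.Tensor:
    """After training, one image is enough: no further interaction is needed."""
    with torch.no_grad():
        return model(to_tensor(rgb_image)).argmax(dim=1).squeeze(0)  # H x W label map
```

The sketch only conveys the division of labor the article describes: the robot’s interaction supplies the labels, and the learned model does the recognizing afterward, with no further pushing required.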
“To our knowledge, this is the first system that leverages long-term robot interaction for object segmentation,” Xiang said.
Ninad Khargonkar, a doctoral student in computer science, said working on the project helped him improve the algorithms that enable the robot to make decisions.
“It’s one thing to develop an algorithm and test it on an abstract dataset; it’s another to test it on real-world tasks,” Khargonkar said. “Seeing how it performed in the real world was a key learning experience.”
The researchers’ next step is to improve other features, such as planning and control, that would enable tasks such as sorting recycled materials.
For more information:
Self-Supervised Unseen Object Instance Segmentation via Long-Term Robot Interaction: www.roboticsproceedings.org/rss19/p017.pdf
Courtesy of University of Texas at Dallas
Citation: New AI technology will dramatically improve robot recognition skills (August 31, 2023). Retrieved June 5, 2024 from https://techxplore.com/news/2023-08-ai-technology-robot-recognition-skills.html