Monday, August 22, 2022

A Robot Has Learned To Imagine Itself For The First Time

A robot developed by Columbia engineers learns a model of itself rather than of its surroundings.

As every athlete or fashion-conscious person knows, our image of our own body is not always accurate or realistic, but it plays an important role in how we function in the world. While you play ball or get dressed, your brain is constantly planning your movements so that you can move your body without bumping, tripping, or falling.



Humans acquire their body models as infants, and robots are beginning to do the same. Today, a group of engineers at Columbia Engineering announced that they had created a robot that, for the first time, can learn a model of its entire body from scratch without any human assistance. In a recent paper published in Science Robotics, the researchers describe how their robot created a kinematic model of itself and used that model to plan motions, achieve goals, and avoid obstacles in a variety of situations. It even automatically detected damage to its body and repaired it.

The robot observes itself like a child playing in a room full of mirrors.

The researchers placed a robotic arm inside a circle of five streaming video cameras. Watching itself through the cameras, the robot undulated freely. It squirmed and twisted to discover precisely how its body moved in response to various motor commands, like a baby discovering itself for the first time in a hall of mirrors. After roughly three hours the robot stopped: its internal deep neural network had finished learning how the robot's movements related to how much space it took up in its surroundings.
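To make the idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of the kind of self-model the article describes: a network that, given the arm's joint angles and a 3D point in the workspace, predicts whether that point is occupied by the robot's body. In the real system the occupancy labels come from the five camera views of the moving arm; the labels below are placeholders, and the architecture and class names are assumptions for illustration.

```python
# Hypothetical sketch of an implicit self-model:
# (joint angles, xyz point) -> probability the point is occupied by the body.
import torch
import torch.nn as nn

class SelfModel(nn.Module):
    def __init__(self, n_joints: int = 4, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, joints: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([joints, points], dim=-1)).squeeze(-1)

# Hypothetical training loop: each sample pairs a random joint configuration
# with workspace points labeled occupied/free. In the actual experiment the
# labels would come from the camera observations, not random numbers.
model = SelfModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    joints = torch.rand(64, 4) * 2 - 1           # random motor commands
    points = torch.rand(64, 3) * 2 - 1           # random workspace points
    labels = torch.randint(0, 2, (64,)).float()  # placeholder occupancy labels
    loss = loss_fn(model(joints, points), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```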

Hod Lipson, professor of mechanical engineering and director of Columbia's Creative Machines Lab, where the work was done, said, "We were particularly intrigued to understand how the robot envisaged itself. But because a neural network is a black box, you can't merely glance inside one." The self-image eventually emerged after the researchers experimented with numerous visualisation techniques. According to Lipson, the robot's three-dimensional body appeared to be engulfed by a kind of softly flickering cloud. "The flickering cloud gently followed the robot as it moved." The robot's self-model was accurate to within about 1% of its workspace.
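Once trained, such a model could be queried in much the same way the researchers visualised their "flickering cloud": sample a grid of workspace points for a given joint configuration and keep the ones the model predicts as occupied. The sketch below continues the hypothetical SelfModel above; the function names, grid range, and obstacle representation are assumptions, not the authors' implementation.

```python
import torch

def predicted_body_cloud(model, joints, resolution=32, threshold=0.5):
    """Return the grid points the self-model believes the body occupies."""
    axis = torch.linspace(-1.0, 1.0, resolution)
    grid = torch.cartesian_prod(axis, axis, axis)            # (R**3, 3) points
    joint_batch = joints.reshape(1, -1).expand(grid.shape[0], -1)
    with torch.no_grad():
        occupancy = model(joint_batch, grid)
    return grid[occupancy > threshold]

# The same query supports planning: a candidate pose is unsafe if its
# predicted body cloud overlaps a known obstacle region (axis-aligned box).
def pose_collides(model, joints, obstacle_min, obstacle_max):
    cloud = predicted_body_cloud(model, joints)
    inside = ((cloud >= obstacle_min) & (cloud <= obstacle_max)).all(dim=-1)
    return bool(inside.any())
```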

Self-modeling robots will result in autonomous systems that are more self-sufficient.

Robots should be able to create models of themselves without assistance from engineers for a variety of reasons. Doing so not only reduces labour costs, but also lets a robot keep track of its own wear and tear and detect and repair damage. The authors argue that this capability is crucial because autonomous systems are increasingly expected to operate independently. For example, a factory robot could notice that something isn't moving properly and make adjustments or request assistance.

"We humans clearly have a notion of self," said Boyuan Chen, the study's first author and now an assistant professor at Duke University. "Close your eyes and try to picture how your body would move if you were to do something, like extend your arms forward or step backward. Somewhere in our brains we have a self-model, or notion of self, that tells us how much of our immediate surroundings we occupy and how that volume changes as we move."

Robot self-awareness

The project is part of Lipson's decades-long search for ways to give robots a semblance of self-awareness. "Self-modeling is a basic sort of self-awareness," he said. "A robot, animal, or human that has a realistic self-model has an evolutionary advantage because it can function better in the real world and make better decisions."

The researchers are aware of the limitations, risks, and controversies involved in giving robots greater autonomy through self-awareness. Lipson is quick to acknowledge that the level of self-awareness shown in this study is "trivial compared to that of humans, but you have to start somewhere. We must go cautiously and deliberately in order to maximise our chances of success and reduce our exposure to risk."

Boyuan Chen, Robert Kwiatkowski, Carl Vondrick, and Hod Lipson, "Full-body visual self-modeling of robot morphologies," Science Robotics, 13 July 2022.
