Scientists claim to have created robots that can ‘see into their own future’.

They have used a technology called ‘visual foresight’, which enables the robots to work out how to manipulate objects they have never encountered before.

In the near future, this technology could help self-driving cars anticipate events on the road, and could lead to more intelligent robotic assistants in homes.

The robots were developed by scientists at the University of California, Berkeley. Using visual foresight, they can predict what their cameras will see if they perform a particular sequence of movements.

For now, these robotic imaginations are still relatively simple, reaching only several seconds into the future.

Even so, that is enough for the robot to figure out how to move objects around on a table without disturbing obstacles.

Crucially, the robot learns to perform these tasks without any help from humans or prior knowledge of physics, its environment, or what the objects are.

That is because the visual imagination is learned entirely from scratch through unsupervised exploration, in which the robot plays with objects on a table.

After this play phase, the robot builds a predictive model of the world and can use that model to manipulate new objects it has not encountered before.
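To make this play phase concrete, here is a minimal sketch, in Python, of what such self-supervised data collection could look like. The DummyTabletopEnv interface, the action dimension, and the step count are illustrative assumptions, not details of the Berkeley setup.

```python
import numpy as np

class DummyTabletopEnv:
    """Stand-in for a real robot arm and camera (purely illustrative)."""
    def reset(self):
        return np.zeros((64, 64, 3))       # initial camera frame

    def step(self, action):
        return np.random.rand(64, 64, 3)   # camera frame after the action

def collect_play_data(env, n_steps=1000, action_dim=4):
    """Unsupervised 'play': take random actions and record what happens.

    Each sample is a (frame, action, next_frame) triple; no human labels
    or reward signals are involved.
    """
    dataset = []
    frame = env.reset()
    for _ in range(n_steps):
        action = np.random.uniform(-1.0, 1.0, size=action_dim)  # random pushing motion
        next_frame = env.step(action)
        dataset.append((frame, action, next_frame))
        frame = next_frame
    return dataset

data = collect_play_data(DummyTabletopEnv())
```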

Sergey Levine, assistant professor at Berkeley’s Department of Electrical Engineering and Computer Sciences, whose lab developed the technology, said: “In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it. This can enable intelligent planning of highly flexible skills in complex real-world situations.”

At the core of the system is a deep learning technology based on convolutional recurrent video prediction, known as dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next based on the robot’s actions.
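At its simplest, such a learned model is a network that takes the current camera frame plus a candidate action and outputs a predicted next frame. The PyTorch sketch below illustrates that idea only; the actual DNA models predict per-pixel motion with convolutional recurrent networks rather than generating the frame directly, and the layer sizes, the 64x64 resolution, and the action dimension here are all assumptions.

```python
import torch
import torch.nn as nn

class ActionConditionedPredictor(nn.Module):
    """Predicts the next camera frame from the current frame and the robot's
    action. A simplification: real DNA models predict per-pixel motion kernels
    with convolutional recurrent networks instead of the frame itself."""
    def __init__(self, action_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
        self.action_proj = nn.Linear(action_dim, 16 * 16)  # broadcast action over the feature map
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(65, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, frame, action):
        feat = self.encoder(frame)                          # (B, 64, 16, 16)
        act = self.action_proj(action).view(-1, 1, 16, 16)  # action as an extra channel
        return self.decoder(torch.cat([feat, act], dim=1))

# Training step: minimize pixel error against the observed next frame
# (stand-in random tensors here; real batches come from the play data).
model = ActionConditionedPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frame = torch.rand(8, 3, 64, 64)
action = torch.rand(8, 4)
next_frame = torch.rand(8, 3, 64, 64)
loss = ((model(frame, action) - next_frame) ** 2).mean()
opt.zero_grad()
loss.backward()
opt.step()
```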

Recent improvements to this class of models, along with greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects.
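Given such a predictor, planning amounts to searching over imagined futures: sample many candidate action sequences, roll each one through the model, and execute the sequence whose predicted final frame looks closest to a goal image. The Berkeley system uses a more sophisticated sampling-based optimizer; this sketch shows a bare-bones random-shooting variant built on the hypothetical predictor above.

```python
import torch

def plan_actions(model, frame, goal_frame, horizon=5, n_samples=200, action_dim=4):
    """Pick the action sequence whose imagined outcome best matches the goal.

    frame, goal_frame: tensors of shape (1, 3, 64, 64).
    Returns the best action sequence, shape (horizon, action_dim).
    """
    frames = frame.repeat(n_samples, 1, 1, 1)                     # one rollout per candidate
    actions = torch.rand(n_samples, horizon, action_dim) * 2 - 1  # random sequences in [-1, 1]
    with torch.no_grad():
        for t in range(horizon):
            frames = model(frames, actions[:, t])                 # imagine one step ahead
        cost = ((frames - goal_frame) ** 2).flatten(1).mean(dim=1)  # pixel distance to goal
    return actions[cost.argmin()]
```

In practice, the robot would execute only the first action of the chosen sequence and then replan from the new camera frame, in the spirit of model-predictive control.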

Chelsea Finn, a doctoral student in Levine’s lab and inventor of the original DNA model, said: “In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own.”