At one point, when you were a toddler, you learned to stand up after a fall and eventually walk on your own two feet. You were probably encouraged by your parents, but most of the time you learned by trial and error. This is not how robots like Spot and Atlas from Boston Dynamics learn to walk and dance. They are meticulously coded to tackle the tasks we set for them. The results can be impressive, but this approach can also prevent the robots from adapting to situations their software doesn't cover. A joint team of researchers from Zhejiang University and the University of Edinburgh claims to have developed a better way.
Yang et al.
In a recent article published in the journal Science Robotics, they detailed an AI-based approach they used to enable their dog-like robot, Jueying, to learn to walk and recover from falls on its own. The team told Wired they first trained software that could guide a virtual version of the robot. It was made up of eight AI "experts," each of which they trained to master a specific skill. For example, one learned to walk fluidly, while another learned to balance. Each time the digital robot successfully completed a task, the team rewarded it with a virtual point. If this all sounds familiar, it's because it's the same approach Google recently used to train its revolutionary MuZero algorithm.
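The multi-expert idea described above can be sketched as a gating network that blends the outputs of several specialist policies, with updates kept only when they improve a reward. This is a minimal, hypothetical illustration of that structure, not the researchers' actual algorithm or code; the dimensions, reward, and update rule are all made-up toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 8   # one per skill (walking, balancing, ...), as in the article
OBS_DIM = 4     # toy observation size (assumption)
ACT_DIM = 2     # toy action size (assumption)

# Each "expert" is a fixed linear policy mapping observations to actions.
experts = [rng.normal(size=(ACT_DIM, OBS_DIM)) for _ in range(N_EXPERTS)]

def gate(obs, w):
    """Softmax gating network: per-observation blend weights over experts."""
    logits = w @ obs
    e = np.exp(logits - logits.max())
    return e / e.sum()

def act(obs, w):
    """Weighted mixture of the experts' proposed actions."""
    weights = gate(obs, w)
    return sum(wi * (E @ obs) for wi, E in zip(weights, experts))

def reward(action):
    # Toy reward standing in for the "virtual point": prefer small,
    # stable actions (roughly, "keep your balance").
    return -np.sum(action ** 2)

# Gating parameters, improved by simple reward-guided random search
# (a crude stand-in for the reinforcement learning used in the paper).
w = np.zeros((N_EXPERTS, OBS_DIM))
for step in range(200):
    obs = rng.normal(size=OBS_DIM)
    noise = rng.normal(scale=0.1, size=w.shape)
    if reward(act(obs, w + noise)) > reward(act(obs, w)):
        w += noise  # keep perturbations that score a higher reward
```

The key design point mirrored from the article is that no single network has to master everything: each expert handles one skill, and the gating layer learns when to trust which expert.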