These Virtual Obstacle Courses Help Real Robots Learn to Walk
An army of more than 4,000 doglike robots marching in unison is a vaguely unsettling sight, even in a simulation. But it may point the way for machines to learn new tricks.
The virtual robot army was created by researchers from ETH Zurich in Switzerland and chipmaker Nvidia. They used the wandering bots to train an algorithm that was then used to control the legs of a real-world robot.
In the simulation, the machines, called ANYmals, confront challenges like slopes, steps, and steep drops in a virtual landscape. Each time a robot learned to navigate a challenge, the researchers presented it with a harder one, nudging the control algorithm to become more sophisticated.
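The training loop described here is a form of curriculum learning. As a rough illustration only (the function names, thresholds, and structure below are invented for this sketch, not taken from the researchers' code), the idea of raising the terrain difficulty once the robots master the current one can be expressed like this:

```python
def run_curriculum(train_step, rounds=50, episodes_per_level=100,
                   success_threshold=0.8, max_level=10):
    """Hypothetical curriculum loop: train_step(level) runs one simulated
    episode at the given terrain difficulty and returns True on success."""
    level, history = 1, []
    for _ in range(rounds):
        if level > max_level:
            break  # hardest terrain mastered
        successes = sum(train_step(level) for _ in range(episodes_per_level))
        rate = successes / episodes_per_level
        history.append((level, rate))
        if rate >= success_threshold:
            level += 1  # challenge mastered: present a harder one
        # otherwise, keep training at the current difficulty
    return history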
From a distance, the resulting scenes resemble an army of ants wriggling across a landscape. During training, the robots mastered walking up and down stairs fairly quickly; trickier obstacles took longer. Descending slopes proved especially difficult, although some of the virtual robots learned how to slide down them.
When the resulting algorithm was transferred to a real version of ANYmal, a four-legged robot roughly the size of a large dog with sensors on its head and a detachable robotic arm, it was able to navigate stairs and blocks but ran into trouble at higher speeds. The researchers blamed shortcomings in how its sensors perceive the real world compared with the simulation.
This approach to robot learning could help machines pick up all sorts of useful skills, from sorting packages to sewing clothes to harvesting crops. The project also reflects the importance of simulation and custom computer chips for future progress in applied artificial intelligence.
"At a high level, simulation that runs very fast is a great thing to have," says Pieter Abbeel, a professor at UC Berkeley and cofounder of Covariant, a company that uses AI and simulation to train robot arms to pick and sort objects for logistics firms. He says the researchers in Switzerland and at Nvidia "got some nice speed-ups."
AI shows promise for training robots to do tasks that are difficult to write into software directly, or that require some sort of adaptation. The ability to grasp objects that are awkward, slippery, or unfamiliar, for example, cannot be written out in lines of code.
The 4,000 simulated robots were trained using reinforcement learning, an AI approach inspired by research on how animals learn through positive and negative feedback. As the robots move their legs, an algorithm judges how this affects their ability to walk, and adjusts the control algorithm's parameters accordingly.
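To make the positive-and-negative-feedback idea concrete, here is a minimal reinforcement-learning sketch, not the paper's actual method: tabular Q-learning on a toy one-dimensional "walkway," where stepping off the edge yields negative feedback and reaching the goal yields positive feedback. Everything here (the task, rewards, and hyperparameters) is invented for illustration:

```python
import random

def train_walker(length=5, episodes=500, alpha=0.5, gamma=0.9,
                 epsilon=0.1, seed=0):
    """Learn to walk right along a strip of `length` positions.
    Reward +1 for reaching the far end, -1 for stepping off the near edge."""
    rng = random.Random(seed)
    actions = (-1, +1)  # step left or step right
    q = {(s, a): 0.0 for s in range(length) for a in actions}
    for _ in range(episodes):
        s = 0  # start at the left end
        while True:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda b: q[(s, b)])
            s2 = s + a
            if s2 < 0:                  # negative feedback: fell off the edge
                r, done = -1.0, True
            elif s2 >= length - 1:      # positive feedback: reached the goal
                r, done = +1.0, True
            else:
                r, done = 0.0, False
            # Feedback adjusts the stored action values (the "parameters" here).
            target = r if done else r + gamma * max(q[(s2, b)] for b in actions)
            q[(s, a)] += alpha * (target - q[(s, a)])
            if done:
                break
            s = s2
    return q

# After training, the learned values should favor stepping right everywhere.
policy = train_walker()
```

Real legged-locomotion training replaces the lookup table with a neural network and the toy walkway with a physics simulator, but the feedback loop has the same shape.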