In the tech industry, we are all witnessing a boom in Artificial Intelligence, thanks in part to contributions from tech giants like Google. Recently, Google, in collaboration with researchers from the University of California, Berkeley, proposed a framework that blends model-based control with learning-based perception. Applied to a wheeled robot, this combination enables it to navigate autonomously around obstacles. The work was described in a study published on the preprint server arXiv.org.
According to the framework's creators, the approach generalizes to unseen humans and buildings, both in simulation and in real-world conditions, and it leads to better and more data-efficient behavior than a purely learning-based approach.
Explaining the use of autonomous navigating robots, the researchers noted that such robots could be employed in many mission-critical applications, from service robots that deliver medicine and food to logistics and search robots that carry out rescue operations. These applications require robots to work safely among people and to act according to what a human does: the robot must first observe human activity and then decide on its own movements. To understand this, consider a robot moving down a corridor with people around it. If the person walking ahead turns to their right, the robot needs to observe this behavior and pass the person on the left so as not to cut them off. In another scenario, if the person keeps moving along the same path as the robot, the robot should keep an appropriate distance from the person, as in the toy sketch below.
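To make the example concrete, here is a minimal, purely illustrative Python sketch of the kind of decision the robot has to make. The actual framework learns this behavior from data rather than following hand-written rules, and every function name and threshold below is a hypothetical assumption.

```python
# Illustrative sketch only: a hand-written heuristic mirroring the behavior
# described above. All names and thresholds are hypothetical.

def choose_maneuver(person_heading_change_deg: float,
                    person_on_robot_path: bool,
                    min_clearance_m: float = 0.5) -> str:
    """Pick a high-level maneuver around a nearby pedestrian."""
    if person_heading_change_deg > 20.0:
        # The person ahead is turning to their right; pass on their left
        # so the robot does not cut them off.
        return "pass_left"
    if person_on_robot_path:
        # The person keeps moving along the robot's path; follow while
        # maintaining at least `min_clearance_m` of separation.
        return f"follow_with_clearance_{min_clearance_m}m"
    return "continue"


print(choose_maneuver(35.0, False))  # -> "pass_left"
print(choose_maneuver(0.0, True))    # -> "follow_with_clearance_0.5m"
```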
To make this model work, the researchers used a dataset called HumANav, an active navigation dataset comprising 6,000 scans of synthetic but realistic humans placed in office buildings. To generate these synthetic humans, the researchers used the SURREAL dataset, which contains images of people in a variety of poses, body shapes, and lighting conditions, rendered on different floors, with motions such as standing, walking, swinging, running, performing acrobatics, and sitting, all with adjustable variables. The building meshes were taken from the open Stanford Large-Scale 3D Indoor Spaces dataset.
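The article describes the synthetic humans as having adjustable variables such as pose, body shape, and lighting. The sketch below is not the actual HumANav or SURREAL API; it only illustrates, with hypothetical names, how such configurable human agents could be parameterized and sampled for placement in a building mesh.

```python
# Hypothetical sketch of adjustable synthetic-human parameters; NOT the real
# HumANav/SURREAL API.

import random
from dataclasses import dataclass


@dataclass
class SyntheticHuman:
    pose: str            # e.g. "walking", "running", "sitting"
    body_shape_id: int   # index into a bank of body shapes
    speed_mps: float     # walking speed in metres per second
    lighting: str        # lighting condition of the rendered scene
    position_xy: tuple   # placement inside the office building mesh


POSES = ["standing", "walking", "swinging", "running", "acrobatics", "sitting"]
LIGHTING = ["bright", "dim", "overcast"]


def sample_human(rng: random.Random) -> SyntheticHuman:
    """Randomly sample one configurable synthetic human."""
    return SyntheticHuman(
        pose=rng.choice(POSES),
        body_shape_id=rng.randrange(100),
        speed_mps=round(rng.uniform(0.3, 1.8), 2),
        lighting=rng.choice(LIGHTING),
        position_xy=(round(rng.uniform(0, 20), 2), round(rng.uniform(0, 20), 2)),
    )


rng = random.Random(0)
print(sample_human(rng))
```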
The scans allow users to manipulate the human agents within the office scenes and to render photorealistic images for the robot through a standard camera, ensuring that the important visual cues associated with a person's movement are present in the images. For example, when someone walks quickly, their legs are farther apart than when they move slowly.
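As a toy illustration of that visual cue (not drawn from the paper), a renderer could couple a synthetic walker's stride to its speed, so that rendered images carry the speed-dependent leg separation a network can learn from; the coefficients below are assumptions.

```python
# Toy stride model: faster walkers get a wider stride, capped at a plausible
# maximum for the given leg length. Coefficients are illustrative assumptions.

def stride_length_m(speed_mps: float, leg_length_m: float = 0.9) -> float:
    stride = 0.4 + 0.5 * speed_mps          # assumed linear toy relation
    return min(stride, 1.6 * leg_length_m)  # cap at a physically sane value


for v in (0.5, 1.0, 1.5):
    print(f"speed {v:.1f} m/s -> stride {stride_length_m(v):.2f} m")
```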
To test how robots perform with the framework applied, the researchers carried out an experiment. They generated 180,000 samples and trained the model, LB-WayPtNav-DH, on 125,000 of them in simulation. When the model was deployed on a Turtlebot 2 without additional training or fine-tuning, it demonstrated behavior that accounted for the dynamic nature of humans and succeeded across ten trials. In one of the test cases, the robot deliberately avoided a collision with a person walking in the opposite direction.
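For reference, here is a minimal sketch of the data split described above: 180,000 simulated samples, 125,000 of which are used for training, with the remainder held out. Only those two numbers come from the article; the splitting code itself is a hypothetical illustration.

```python
# Sketch of the simulated-data split (180k generated, 125k used for training).
# The loading/shuffling code is illustrative, not the authors' pipeline.

import random

TOTAL_SAMPLES = 180_000
TRAIN_SAMPLES = 125_000

rng = random.Random(42)
indices = list(range(TOTAL_SAMPLES))
rng.shuffle(indices)

train_idx = indices[:TRAIN_SAMPLES]
heldout_idx = indices[TRAIN_SAMPLES:]   # remaining samples kept for evaluation

print(f"train: {len(train_idx)}, held out: {len(heldout_idx)}")
# After training purely in simulation, the policy is deployed on the physical
# Turtlebot 2 with no additional fine-tuning (zero-shot sim-to-real transfer).
```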
To make the robots' operation more reliable, the team says the framework does not require explicit estimation of a human's state or prediction of their trajectory. In fact, the framework produces smoother paths than those in prior work. In addition, the team explained that the agent learns to reason about the dynamic nature of humans, taking their upcoming movement into account when planning its own path.
Talking about future plans, a co-author of the project said the team is interested in studying richer navigational behavior in more crowded and complex environments. Another direction they plan to pursue is estimating the robot's state in the presence of noise.
Google is not the only technology giant conducting research in autonomous robotics; many other big firms are making contributions as well. Facebook recently released a simulator, AI Habitat, which can train embodied AI agents, such as home robots, in environments that mimic real apartments and offices. In an article published last December, Amazon researchers also described a home robot that asks humans for directions when it does not know which way to move. Beyond these tech giants, smaller robotics companies are playing their part too, which is why efficient mobile robotic manipulators are already working in production facilities and in our everyday surroundings.