Robots excel at identical, repetitive movements, such as a simple task on an assembly line. (Pick up a cup. Turn it over. Put it down.) What they lack is the ability to perceive objects as those objects move through an environment. (A human picks up a cup, puts it down in a random location, and the robot must retrieve it.) In a recent study on 6D object pose estimation, researchers at the University of Illinois at Urbana-Champaign, NVIDIA, the University of Washington, and Stanford University developed a filter that gives robots greater spatial perception, so they can manipulate objects and navigate through space more accurately.
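To make the term concrete: a 6D pose couples a 3D position with a 3D orientation, and a pose filter refines that estimate as new, noisy camera observations arrive. The sketch below is only an illustration of that general idea, not the researchers' method; the `PoseFilter` class, the blending gain, and the quaternion handling are all assumptions made for this example.

```python
import numpy as np

# A 6D pose couples a 3-DOF translation with a 3-DOF rotation.
# Rotation is stored here as a unit quaternion (w, x, y, z); this is a
# common convention, not necessarily the one used in the study.

def quat_slerp(q0, q1, alpha):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to linear blend
        q = q0 + alpha * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    s0 = np.sin((1.0 - alpha) * theta) / np.sin(theta)
    s1 = np.sin(alpha * theta) / np.sin(theta)
    return s0 * q0 + s1 * q1

class PoseFilter:
    """Toy filter that blends noisy 6D pose observations over time."""

    def __init__(self, translation, quaternion, gain=0.3):
        self.t = np.asarray(translation, float)   # x, y, z in metres
        self.q = np.asarray(quaternion, float)    # unit quaternion
        self.gain = gain                          # trust placed in each new observation

    def update(self, observed_t, observed_q):
        # Nudge the current estimate toward the new (noisy) observation.
        self.t = (1.0 - self.gain) * self.t + self.gain * np.asarray(observed_t, float)
        self.q = quat_slerp(self.q, observed_q, self.gain)
        return self.t, self.q

# Example: smoothing noisy camera detections of a cup on a table.
f = PoseFilter(translation=[0.0, 0.0, 0.0], quaternion=[1.0, 0.0, 0.0, 0.0])
for obs_t in ([0.10, 0.0, 0.02], [0.12, 0.01, 0.02], [0.11, 0.0, 0.03]):
    t, q = f.update(obs_t, [1.0, 0.0, 0.0, 0.0])
print("smoothed position:", np.round(t, 3))
```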