Robots are quickly becoming incorporated into our daily lives. The development of prototype robotic assistants such as the Willow Garage PR2 and the NASA-GM Robonaut, together with the rise of commercial robotic products such as the Roomba, has demonstrated both the interest in and the applications for robots that can function successfully in human environments. Critical to the successful adoption of robots in human workspaces is their ability to work in semi-structured and even unstructured environments, which differ greatly from current robotic workcells. Enabling this move is ongoing research in vision-guided robot control, or visual servoing, which allows robots to operate within the “vision-based” world that humans work in. Almost all robot assistants to date incorporate one or more vision systems, typically a camera mounted on the robot.
One common problem with using a camera as the feedback sensor is losing sight of the object being viewed. In surveillance, a suspect may run out of the camera’s field of view; in a rescue mission, an obstacle may occlude the victim from the camera. In such situations the robot needs to acquire new data to locate the lost object. This data could come from other sensor platforms, if available; alternatively, the robot could search for the target based on the data it has previously collected.
Irrespective of the visual task at hand prior to the target being lost, we want robots to find the lost target efficiently and then robustly locate it within a safe region of the acquired image. Search efficiency requires a high-speed search along an optimized trajectory, while robustness requires a cautious transition between the completed search and the restarted visual task once target visibility is regained. This equips robots with an algorithm to handle lost-target scenarios and to resume their visual tasks autonomously.