Tethered Aerial Visual Assistance

15 Jan 2020  ·  Xuesu Xiao, Jan Dufek, Robin R. Murphy

In this paper, an autonomous tethered Unmanned Aerial Vehicle (UAV) is developed into a visual assistant in a marsupial co-robot team, collaborating with a tele-operated Unmanned Ground Vehicle (UGV) for robot operations in unstructured or confined environments. These environments pose extreme challenges to the remote tele-operator: the robot's onboard cameras provide only a stationary, limited field of view with no depth perception, and the unstructuredness and confinement of the scene further degrade situational awareness. In current practice, these perceptual limitations are overcome with a secondary tele-operated robot that acts as a visual assistant and provides external viewpoints for the primary robot. However, a second tele-operated robot requires extra manpower and imposes teamwork demands between the primary and secondary operators, and the manually chosen viewpoints tend to be subjective and sub-optimal. Considering these intricacies, we develop an autonomous tethered aerial visual assistant that replaces the secondary tele-operated robot and its operator, reducing the human-robot ratio from 2:2 to 1:2. Using a fundamental viewpoint quality theory, a formal risk reasoning framework, and a newly developed tethered motion suite, our visual assistant autonomously navigates to good-quality viewpoints in a risk-aware manner through unstructured or confined spaces while trailing its tether. The resulting marsupial co-robot team can improve tele-operation efficiency in nuclear operations, bomb squads, disaster robotics, and other domains with novel tasks or highly occluded environments, by reducing manpower and teamwork demands and by achieving better visual assistance quality with trustworthy, risk-aware motion.
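
The core decision described in the abstract is choosing an external viewpoint by weighing viewpoint quality against the risk the tethered UAV incurs while flying to it. The sketch below illustrates that trade-off in a deliberately minimal form; it is not the paper's formulation. The grid discretization, the per-cell risk values, the `quality` map, the `path_risk` helper, and the `risk_weight` parameter are all hypothetical placeholders introduced only for illustration.

```python
# Minimal illustrative sketch (not the paper's implementation): select a
# viewpoint by trading off viewpoint quality against accumulated motion risk.
import heapq


def path_risk(grid_risk, start, goal):
    """Minimum accumulated risk from start to goal on a 4-connected grid,
    computed with Dijkstra's algorithm over per-cell risk values."""
    rows, cols = len(grid_risk), len(grid_risk[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid_risk[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")  # goal unreachable


def select_viewpoint(candidates, quality, grid_risk, current, risk_weight=0.5):
    """Pick the candidate viewpoint with the best quality-minus-risk score."""
    best, best_score = None, float("-inf")
    for vp in candidates:
        score = quality[vp] - risk_weight * path_risk(grid_risk, current, vp)
        if score > best_score:
            best, best_score = vp, score
    return best


# Toy usage: a 3x3 risk grid, two candidate viewpoints, and the UAV at (0, 0).
risk = [[0.1, 0.1, 0.9],
        [0.1, 0.5, 0.9],
        [0.1, 0.1, 0.1]]
quality = {(0, 2): 0.9, (2, 2): 0.7}
print(select_viewpoint([(0, 2), (2, 2)], quality, risk, (0, 0)))  # -> (2, 2)
```

In the toy example the lower-quality but safer viewpoint wins, which is the kind of risk-aware behavior the abstract describes; the paper itself derives viewpoint quality and risk from its own theory, risk reasoning framework, and tethered motion suite rather than from a simple weighted difference.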
