Most cyber-physical human systems (CPHS) rely on users learning how to interact with the system. Instead, a collaborative CPHS should learn from the user and adapt to them in a way that improves holistic system performance. Accomplishing this requires collaboration between the human-robot/human-computer interaction and cyber-physical system communities, so that knowledge about users can be fed back into the design of the CPHS. The requisite user studies, however, are difficult, time-consuming, and must be carefully designed. Furthermore, because humans interact with autonomy in complex ways, it is difficult to know, a priori, how many users must participate to attain conclusive results. In this project we are working toward a new strategy that augments current shared control and provides a mechanism to directly feed results from the HRI community back into autonomy design.
The NIMBUS Lab is conducting research into pilot loss of control, a leading cause of UAS crashes. We present a novel control authority switching system that assesses the potential for pilot loss of control and decides whether to hand control to a provably safe computer control system to avoid loss of control and a resulting crash. The switching algorithm models the pilot-UAS interaction as a Markov Decision Process (MDP), accounting for the stochastic nature of the user's control inputs and vehicle state. To train the MDP we recruited 13 UAS pilots of varying skill levels and conducted 103 flights of varying difficulty, collecting all position and orientation data alongside each pilot's skill level. We make this corpus of data publicly available to encourage more research into UAS loss of control and crash events. Finally, we evaluate the effectiveness of our MDP policy on the remaining (non-training) flight data, demonstrating a 75% success rate in predicting the outcome of the flight.
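To make the switching idea concrete, the sketch below solves a toy version of such an MDP with value iteration. The state space, action set, transition probabilities, and rewards here are illustrative assumptions, not the trained model from the flight corpus; in the actual system the transition probabilities would be estimated from the collected pilot flight data.

```python
import numpy as np

# Illustrative states: 0 = stable flight, 1 = degrading control,
# 2 = loss of control (absorbing crash state).
# Actions: 0 = leave the pilot in control, 1 = switch to safe autonomy.
N_STATES, N_ACTIONS = 3, 2

# Hypothetical transition probabilities P[a][s, s']; a real system would
# estimate these from logged pilot flight data.
P = np.zeros((N_ACTIONS, N_STATES, N_STATES))
P[0] = [[0.90, 0.10, 0.00],   # pilot keeps control
        [0.20, 0.50, 0.30],   # degrading flight may tip into a crash
        [0.00, 0.00, 1.00]]   # crash is absorbing
P[1] = [[0.95, 0.05, 0.00],   # safe autonomy takes over
        [0.60, 0.40, 0.00],   # autonomy can recover degrading flight
        [0.00, 0.00, 1.00]]

# Rewards R[s, a]: a small penalty for taking control away from the
# pilot, a large per-step penalty once control is lost.
R = np.array([[0.0,   -1.0],
              [0.0,   -1.0],
              [-100.0, -100.0]])

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Solve the MDP, returning state values and a switching policy."""
    V = np.zeros(P.shape[1])
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] * V[s']
        Q = R + gamma * np.einsum('asn,n->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration(P, R)
```

Under these assumed dynamics the resulting policy leaves a stable pilot in control but switches to the safe controller once flight begins to degrade, mirroring the intended behavior of the switching system.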