Below is an overview of some of the projects underway in the NIMBUS Lab. We are working in the following areas:

  • Safe, precise and repeatable maneuvers
  • Failure detection and recovery
  • Extended flying autonomy
  • Adaptive sensing
  • Teaming and coordination
  • Capturing and synthesizing user expertise
  • Automated extraction of system specifications
  • Mission planning and analysis
  • Underwater robots and sensor networks

Reducing Failure Rates of Robotic Systems through Inferred Invariants Monitoring


System monitoring can help to detect abnormalities and avoid failures. Crafting monitors for today's robotic systems, however, can be very difficult due to the systems' inherent complexity. In this work we address this challenge with an approach that automatically infers system invariants and synthesizes those invariants into monitors. The approach is novel in that it derives invariants by observing the messages passed between system nodes, and the invariant types are tailored to match the spatial, temporal, and operational attributes of robotic systems. Further, the generated monitor can be seamlessly integrated into systems built on top of publish-subscribe architectures. An application of the technique to a system in which an unmanned aerial vehicle lands on a moving platform shows that it can significantly reduce the number of crashes in unexpected landing scenarios.
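As a rough illustration of the idea (not the actual tool), the sketch below infers simple min/max range invariants over observed message fields and then flags messages that violate them; the field names and values are hypothetical:

```python
# Sketch: inferring range invariants from observed message values and
# checking new messages against them. Field names and values are
# hypothetical; the real system derives richer spatial, temporal, and
# operational invariant types.

def infer_invariants(observations):
    """Derive a (min, max) range invariant per message field from a training trace."""
    invariants = {}
    for msg in observations:
        for field, value in msg.items():
            lo, hi = invariants.get(field, (value, value))
            invariants[field] = (min(lo, value), max(hi, value))
    return invariants

def monitor(msg, invariants):
    """Return the list of fields whose values violate an inferred invariant."""
    violations = []
    for field, value in msg.items():
        if field in invariants:
            lo, hi = invariants[field]
            if not (lo <= value <= hi):
                violations.append(field)
    return violations

# Training trace observed during nominal landings (hypothetical values).
trace = [{"altitude": 1.2, "descent_rate": 0.3},
         {"altitude": 0.8, "descent_rate": 0.5},
         {"altitude": 0.4, "descent_rate": 0.4}]
inv = infer_invariants(trace)

# A descent rate far outside anything observed triggers the monitor.
print(monitor({"altitude": 0.6, "descent_rate": 2.0}, inv))  # ['descent_rate']
```

In a publish-subscribe system, the `monitor` check would run as an ordinary subscriber on each topic, which is what makes seamless integration possible.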

This work is partially supported by NSF CNS-#1217400 and AFOSR #FA9550-10-1-0406.

Co-Aerial-Ecologist: Robotic Water Sampling and Sensing in the Wild



The goal of this research is to develop an aerial water sampling system that can be quickly and safely deployed to reach varying and hard-to-access locations, that integrates with existing static sensors, and that is adaptable to a wide range of scientific goals. The capability to obtain quick samples over large areas will lead to a dramatic increase in the understanding of critical natural resources. This research will enable better interactions between non-expert operators and robots by using semi-autonomous systems to detect faults and unknown situations, ensuring the safety of operators and the environment.

This research is partially funded as part of the National Robotics Initiative through the USDA National Institute of Food and Agriculture (grant #2013-67021-20947).

Predictable Run-time Monitoring

We are working on a framework that provides predictable run-time monitoring for UAV applications. Simply put, predictability in run-time monitoring requires that 1) detection latency is bounded from above and 2) the resources used for run-time monitoring are controlled. We previously developed the theory for this; now we are applying it to our UAVs, which run ROS. This is particularly challenging since, as stated on the ROS website, ROS is not designed for real-time applications. We are therefore adapting our theoretical work to this real system to still guarantee real-time performance for ROS programs.

For a formal proof of schedulability, we need to obtain task parameters such as periods (or minimum inter-arrival times) and worst-case execution times (WCETs). Unfortunately, this seemingly easy job is anything but. Due to cache effects and I/O operations, the execution time of a task can vary within a large range. If we simply use the maximum measured execution time as the WCET, chances are it will be so pessimistic that the system cannot pass the formal schedulability test (even though it runs well in practice!). In some real-time systems, designers disable caches and limit I/O operations to get predictable timing information, but we want to avoid doing this because it reduces performance in practice.
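To illustrate why pessimistic WCETs matter, here is a minimal sketch of the classic Liu-Layland utilization test for rate-monotonic scheduling; the task parameters are hypothetical. A task set that passes with typical execution times can fail when measured maxima are used as the WCETs:

```python
def rm_schedulable(tasks):
    """Sufficient Liu-Layland test for rate-monotonic scheduling:
    a set of n periodic tasks is schedulable if its total utilization
    does not exceed n * (2^(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(wcet / period for wcet, period in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

# (execution time, period) pairs in ms -- hypothetical monitoring tasks.
typical = [(2.0, 10.0), (5.0, 40.0), (10.0, 100.0)]       # typical measurements
pessimistic = [(4.0, 10.0), (12.0, 40.0), (30.0, 100.0)]  # measured maxima as WCETs

print(rm_schedulable(typical))      # True  (utilization 0.425 <= ~0.78)
print(rm_schedulable(pessimistic))  # False (utilization 1.0)
```

Note that the second task set may still run fine in practice: the test is sufficient but not necessary, and the measured maxima rarely occur simultaneously.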

This project is sponsored, in part, by NASA.

Adaptive and Autonomous Energy Management on a Sensor Network Using Aerial Robots

This research introduces novel recharging systems and algorithms to supplement existing systems and lead to autonomous, sustainable energy management on sensor networks. The purpose of this project is to develop a wireless power transfer system that enables unmanned aerial vehicles (UAVs) to provide power to, and recharge the batteries of, wireless sensors and other electronics far removed from the electric grid. We accomplish this wireless power transfer through the use of strongly coupled resonances. We have designed and built a custom power transmission and receiving system that is optimized for use on UAVs. We are investigating systems and control algorithms to optimize the power transfer from the UAV to the remote sensor node. In addition, we are investigating energy usage algorithms to optimize the use of power in networks of sensors that can be recharged wirelessly from UAVs.

Applications include powering sensors in remote locations without access to grid or solar energy, such as: underwater sensors that surface intermittently to send data and recharge, underground sensors, sensors placed under bridges for structural monitoring, sensors that are only activated when the UAV is present, and sensors in locations where security or aesthetic concerns prevent mounting solar panels.  See the project webpage for more details.  This work is partially supported by the National Science Foundation.

Crop Surveying Using Aerial Robots

Surveying crop heights during a growing season provides important feedback on crop health and its reaction to environmental stimuli, such as rain or fertilizer treatments. Gathering these data at high spatio-temporal resolution poses significant challenges to both researchers conducting phenotyping trials and commercial agricultural producers. Currently, crop height information is gathered via manual measurements with a tape measure, or via mechanical methods such as a tractor driving through the field with an ultrasonic or mechanical height-estimation tool. These measurement processes are time-consuming and frequently damage the crops and field. As such, even though crop height information can be extremely valuable throughout the growing season, it is typically only collected at the end of the season.

ROS Glass Tools

Robot systems frequently have a human operator in the loop who directs them at a high level and monitors for unexpected conditions. In this project we aim to provide an open-source toolset that interfaces the Robot Operating System (ROS) with Google Glass. The Glass acts as a heads-up display so that an operator monitoring vehicle state can simultaneously view the vehicle's actions in the real world. In addition to monitoring, the project aims to harness the Glass's voice recognition to allow voice control of robots. The project also aims to be easily extensible so it can be used to monitor and control a multitude of robot systems using the Glass. More information on the tools can be found on the project webpage.

UAV Operation in the Wild


The goal of this project is to enable operation of UAVs in the wild. To demonstrate the system, we are developing the hardware, systems, and algorithms needed to catch a ball thrown at a UAV in an unstructured environment (with no motion capture system). To this end, we are investigating: 1) vision algorithms for detecting and tracking objects; 2) models of the ball trajectory to enable fast trajectory estimation after only a few frames of information; 3) collision avoidance and trajectory-intersection algorithms to enable avoiding obstacles and intersecting the trajectory of balls with uncertain information; and 4) algorithms to aggressively control the trajectory of the vehicle with little absolute position feedback information.
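As a sketch of the trajectory-estimation step, a ballistic path can be recovered from just a few (time, height) observations with a least-squares quadratic fit; the launch parameters below are hypothetical:

```python
import numpy as np

def fit_ball_trajectory(times, heights):
    """Least-squares quadratic fit z(t) = a*t^2 + b*t + c from a few frames.
    For a ballistic ball, the leading coefficient a should be close to -g/2."""
    return np.polyfit(times, heights, 2)  # returns [a, b, c]

def predict_height(coeffs, t):
    """Evaluate the fitted trajectory at a future time t."""
    return np.polyval(coeffs, t)

# Hypothetical observations: ball launched at 5 m/s from 1 m height, g = 9.81 m/s^2.
g = 9.81
t_obs = np.array([0.00, 0.05, 0.10, 0.15])      # four camera frames
z_obs = 1.0 + 5.0 * t_obs - 0.5 * g * t_obs**2  # observed heights

coeffs = fit_ball_trajectory(t_obs, z_obs)
# Predict where the ball will be at t = 0.5 s so the UAV can plan an intercept.
print(predict_height(coeffs, 0.5))
```

In practice the observations are noisy pixel measurements rather than exact heights, so the fit is refined as more frames arrive.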
This project is sponsored, in part, by Ascending Technologies.

Virtual Cage for Cost-Effective UAV Prototyping

Research and experimentation with UAVs/MAVs risk damage to those systems whenever an undesirable problem arises: erroneous algorithms, hardware or component failures, pilot error, unpredictable and/or misunderstood physical models, etc. Some of these problems can produce crashes and, consequently, delays while fixing the system, or even its destruction. To avoid these unfortunate situations, we have developed the Virtual Cage system. It supports the work of researchers by constraining the space in which UAVs can operate, preventing the UAV from crashing into objects outside of a defined workspace. The Virtual Cage requires only a downward-facing camera and some way of tracking altitude and attitude. Safety constraints are calculated based on the specifications of the MAV and its sensing capabilities, so that a deviation from the workspace area will be detected and the MAV will take a configurable action: hover in place, land, or return to the workspace.
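A minimal sketch of the cage logic, assuming an axis-aligned box workspace and a fixed safety margin (the actual system derives its constraints from the MAV's specifications and sensing capabilities):

```python
def virtual_cage(position, bounds, margin=0.5):
    """Check a position against an axis-aligned workspace box.
    position: (x, y, z); bounds: ((xmin, xmax), (ymin, ymax), (zmin, zmax)).
    Returns a configurable action: 'ok' well inside the workspace,
    'hover' when within `margin` of a boundary, 'return' once outside."""
    inside = all(lo <= p <= hi for p, (lo, hi) in zip(position, bounds))
    if not inside:
        return 'return'
    near_boundary = any(p - lo < margin or hi - p < margin
                        for p, (lo, hi) in zip(position, bounds))
    return 'hover' if near_boundary else 'ok'

# Hypothetical 10 m x 10 m x 5 m workspace.
bounds = ((0, 10), (0, 10), (0, 5))
print(virtual_cage((5.0, 5.0, 2.0), bounds))   # 'ok'
print(virtual_cage((9.8, 5.0, 2.0), bounds))   # 'hover' (0.2 m from the x boundary)
print(virtual_cage((11.0, 5.0, 2.0), bounds))  # 'return'
```

The margin would in practice be chosen from the vehicle's maximum speed and the detection latency, so the MAV can always stop before leaving the cage.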

This project is sponsored, in part, by the Air Force Office of Scientific Research.

Mining Temporal Specifications for ROS-based Applications

Modern software systems are complex and large. This makes it challenging for a programmer to develop a good understanding of the system components, which is required for system maintenance and for giving assurances about their intended behavior. Formal software specifications are useful aids to system understanding, but developing the specifications, as well as maintaining them as systems evolve, are nontrivial tasks. If performed manually, they put an extra burden on the programmer. These tasks are particularly challenging for distributed systems, in which components may evolve independently and interact with each other in complex ways.

ROS-based applications are distributed: multiple nodes interact with one another using messages. In this project, we are developing novel static and dynamic program analysis techniques to identify the temporal aspects of the complex interactions among ROS nodes, to help programmers develop a deeper understanding of them. Our analysis techniques extract temporal specifications that can be used for documentation, test data generation, and application monitoring.
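As a toy illustration of the dynamic side of specification mining, the sketch below checks the classic "response" pattern (every occurrence of message a is eventually followed by b) over recorded message traces and reports the pairs that hold on all of them; the message names are hypothetical:

```python
from itertools import product

def holds_response(trace, a, b):
    """True if in this trace every occurrence of message `a`
    is eventually followed by an occurrence of message `b`."""
    pending = False
    for msg in trace:
        if msg == a:
            pending = True
        elif msg == b:
            pending = False
    return not pending

def mine_response_specs(traces):
    """Return all (a, b) pairs such that 'a is always followed by b'
    holds on every observed trace -- candidate temporal specifications."""
    alphabet = {m for t in traces for m in t}
    return sorted((a, b) for a, b in product(alphabet, alphabet)
                  if a != b and all(holds_response(t, a, b) for t in traces))

# Hypothetical message traces recorded from two runs of a ROS application.
traces = [["takeoff", "scan", "land"],
          ["takeoff", "land"]]
print(mine_response_specs(traces))  # [('scan', 'land'), ('takeoff', 'land')]
```

A miner like this can only propose candidates supported by the observed traces; static analysis and programmer review are needed to confirm which candidates are true specifications.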

This project is sponsored, in part, by the Air Force Office of Scientific Research.

IR Relative Localization (IRLP)

Most high-accuracy robot localization systems require some form of infrastructure, typically a camera network that can tell a robot its location. In outdoor environments, GPS can be used for localization, but its accuracy can vary by several meters. The infrared relative localization project (IRLP) aims to achieve high-accuracy, real-time relative localization in GPS-denied environments with no infrastructure using infrared (IR) light. We use an array of IR emitters and receivers to obtain precise relative location information through both trilateration (using range information based on signal strength) and triangulation (using angle information obtained from the line-of-sight nature of the emitters). We are developing the hardware and software to enable localization onboard sets of UAVs so groups of UAVs can work closely and precisely in unstructured environments.
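A minimal sketch of the trilateration step in 2D, assuming known anchor positions and noise-free ranges (the real system works in 3D, fuses range and angle information, and must cope with noisy signal-strength-based ranges):

```python
import math

def trilaterate_2d(anchors, ranges):
    """Estimate (x, y) from ranges to three known anchors. Subtracting the
    circle equations pairwise eliminates the quadratic terms, leaving a
    2x2 linear system to solve."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    A = 2 * (x2 - x1); B = 2 * (y2 - y1)
    C = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    D = 2 * (x3 - x1); E = 2 * (y3 - y1)
    F = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = A * E - B * D  # nonzero when anchors are not collinear
    return (C * E - B * F) / det, (A * F - C * D) / det

# Hypothetical anchors (e.g., IR emitters on neighboring UAVs) and a true position.
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
true_pos = (1.0, 2.0)
ranges = [math.dist(true_pos, a) for a in anchors]
print(trilaterate_2d(anchors, ranges))  # approximately (1.0, 2.0)
```

With noisy ranges, more than three anchors and a least-squares solve replace the exact 2x2 system; combining this with the angle information is what gives the precision needed for close-proximity flight.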

This project is sponsored, in part, by the Air Force Office of Scientific Research.