Multi-agent systems have the potential to change how real-world problems are solved with robotics. Intuitively, a multi-agent system, or swarm, of drones can reduce the time needed to complete a task by distributing objectives among its agents. Alternatively, swarms can scale tasks: more area can be covered in environmental monitoring, for example, or additional provisions can be delivered during a mountain-top resupply. The application potential is vast, but coordinating and controlling a robotic swarm in the real world is difficult, and much work has gone into developing multi-agent control and coordination schemes.
Although multi-agent systems, particularly unmanned aircraft systems (UAS), have entered the public eye in the entertainment sector, for example in drone light shows, the agents that make up these systems are simple and may have very little onboard autonomy. Rather, they may follow a pre-planned path without the ability to adapt to what other agents in the swarm are doing or to changes in the environment. In contrast, distributed (or decentralized) control schemes seek to accomplish high-level objectives by allowing agents to individually sense each other, collectively plan how to accomplish their task, and then act on that plan. Many swarm controllers have been devised, but few have been deployed and tested in a real-world, uncertain environment with limited communication.
In this project we are developing a testbed of intelligent multicopters capable of operating in a real-world environment for testing and improving swarming algorithms. We address challenges resulting from extremely limited onboard communication, mesh networking, violations of design assumptions, environmental uncertainty, and dynamic mission objectives. A custom mothership deployment mechanism transports four smaller agent drones to a desired location, allowing the swarm to use energy more efficiently. Larger swarm agents carry onboard camera systems for testing novel perception capabilities, including target identification and tracking and counter-UAS objectives. As part of the testbed we have developed a suite of tools specifically for rapid swarm deployments, including fast, consistent configuration, custom command-and-control interfaces, and a custom communication backend for interacting with the collective swarm.
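The communication backend described above can be illustrated with a minimal topic-based message bus: a ground station publishes a command once, and every subscribed agent receives it. This is only a sketch of the pattern; the `SwarmBus` class, its methods, and the `swarm/cmd` topic are illustrative assumptions, not the project's actual backend or message format.

```python
import json
from collections import defaultdict
from typing import Callable, Dict, List

class SwarmBus:
    """Hypothetical topic-based message bus for one-to-many swarm commands."""

    def __init__(self) -> None:
        # Map each topic name to the list of handlers subscribed to it.
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> int:
        # Round-trip through JSON to mimic an on-the-wire payload, then fan out.
        payload = json.loads(json.dumps(message))
        for handler in self._subscribers[topic]:
            handler(payload)
        return len(self._subscribers[topic])

# Example: broadcast one goto command to four agent drones.
bus = SwarmBus()
received = []
for agent_id in range(4):
    bus.subscribe("swarm/cmd",
                  lambda msg, i=agent_id: received.append((i, msg["waypoint"])))
delivered = bus.publish("swarm/cmd", {"type": "goto", "waypoint": [10.0, 5.0, 20.0]})
```

A single publish reaching all subscribers is the property that matters here: it keeps per-agent bandwidth use flat as the swarm grows, which is why a broadcast-style backend suits the limited-communication setting described above.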
To showcase the capabilities of our testbed, we have deployed a novel hierarchical reinforcement learning control strategy for sub-tasking groups of agents in the swarm. We have also deployed and tested a drone identification classifier for use in counter-UAS missions with our collaborators in the IMAGE+Signal Analysis Laboratory.
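The hierarchical decomposition can be sketched as a high-level policy that assigns a sub-task to each group of agents while low-level controllers execute it. The toy tabular, epsilon-greedy policy below, the sub-task names, and the reward model are all illustrative assumptions; they stand in for the project's actual learned controllers, which are not described here.

```python
import random
from typing import Dict, List, Tuple

# Hypothetical sub-task vocabulary for groups of swarm agents.
SUBTASKS = ["search", "track", "resupply"]

class HighLevelPolicy:
    """Epsilon-greedy assignment of sub-tasks to groups via a Q-table."""

    def __init__(self, n_groups: int, epsilon: float = 0.1) -> None:
        self.q: Dict[Tuple[int, str], float] = {
            (g, s): 0.0 for g in range(n_groups) for s in SUBTASKS
        }
        self.n_groups = n_groups
        self.epsilon = epsilon

    def assign(self, rng: random.Random) -> List[str]:
        # For each group: explore a random sub-task with prob. epsilon,
        # otherwise exploit the current best-valued sub-task.
        tasks = []
        for g in range(self.n_groups):
            if rng.random() < self.epsilon:
                tasks.append(rng.choice(SUBTASKS))
            else:
                tasks.append(max(SUBTASKS, key=lambda s: self.q[(g, s)]))
        return tasks

    def update(self, group: int, subtask: str, reward: float, lr: float = 0.5) -> None:
        # Simple running-average value update toward the observed reward.
        self.q[(group, subtask)] += lr * (reward - self.q[(group, subtask)])

rng = random.Random(0)
policy = HighLevelPolicy(n_groups=2)
# Toy environment: group 0 is rewarded for searching, group 1 for tracking.
for _ in range(50):
    tasks = policy.assign(rng)
    for g, t in enumerate(tasks):
        reward = 1.0 if (g, t) in {(0, "search"), (1, "track")} else 0.0
        policy.update(g, t, reward)
```

The point of the hierarchy is that the high-level policy reasons only over a few discrete sub-tasks per group rather than the joint action space of every agent, which keeps the decision problem tractable as the swarm grows.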