Week 6


This week, Christina and I implemented a convolutional neural network (CNN) in TensorFlow and Keras to recognize the sign language alphabet. After four epochs of training, our accuracy reached ~91% on the test data. With a larger, more diverse dataset, additional regularization techniques, and longer training, this number could likely climb into the upper nineties.
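
For reference, a stripped-down sketch of the kind of network we built might look like the following. The layer sizes, input shape, and class count here are illustrative assumptions (28×28 grayscale images and 24 static letter classes), not our exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative sketch of a small CNN for sign-language letters.
# Assumes 28x28 grayscale inputs and 24 classes (static letters only,
# since J and Z involve motion) -- adjust to the actual dataset.
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),  # regularization: randomly drops units during training
    layers.Dense(24, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training for a few epochs, as we did:
# model.fit(x_train, y_train, epochs=4, validation_data=(x_test, y_test))
```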

Afterward, we moved on to real-time object detection. Ji Young helped us get started with YOLO (“You Only Look Once”), an object detection system that runs in real time and, in many respects, outperforms competing algorithms (including the historically popular R-CNN). With this algorithm in mind, we began work on our final project, which centers on real-time drone-to-drone detection. After Christina found a dataset of hundreds of labeled drone images, I wrote a short Python script that extracts the relevant data from each image/label pair and writes it to a file in the format YOLO expects. I then began training the network on this dataset.
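
A sketch of that kind of conversion script is below. It assumes the source labels are corner-style pixel boxes in a headerless CSV (filename, xmin, ymin, xmax, ymax); the file names and CSV layout are hypothetical, not our exact files. Darknet-style YOLO wants one text file per image, each line holding a class ID plus a box center and size normalized to [0, 1]:

```python
import csv
import os
from PIL import Image

# Hypothetical paths/layout -- adjust to the actual dataset.
LABELS_CSV = "drone_labels.csv"   # rows: filename, xmin, ymin, xmax, ymax
IMAGE_DIR = "images"
CLASS_ID = 0                      # single class: drone

with open(LABELS_CSV, newline="") as f:
    for filename, xmin, ymin, xmax, ymax in csv.reader(f):
        # Image dimensions are needed to normalize the box coordinates.
        img_w, img_h = Image.open(os.path.join(IMAGE_DIR, filename)).size
        xmin, ymin, xmax, ymax = map(float, (xmin, ymin, xmax, ymax))

        # YOLO format: box center and size as fractions of the image.
        x_center = (xmin + xmax) / 2 / img_w
        y_center = (ymin + ymax) / 2 / img_h
        width = (xmax - xmin) / img_w
        height = (ymax - ymin) / img_h

        # One .txt label file per image, alongside the image itself.
        txt_path = os.path.join(IMAGE_DIR, os.path.splitext(filename)[0] + ".txt")
        with open(txt_path, "w") as out:
            out.write(f"{CLASS_ID} {x_center:.6f} {y_center:.6f} "
                      f"{width:.6f} {height:.6f}\n")
```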

Looking forward, I plan to incorporate some regularization techniques into the YOLO network that we’re training on our data. Since a large focus of my literature review is the optimization of deep networks to improve their ability to generalize to new data, I believe it would be beneficial to bring this into our final project.
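
As a generic illustration (not YOLO-specific, and the coefficients are assumed values), two of the most common regularizers look like this in Keras:

```python
from tensorflow.keras import layers, regularizers

# L2 weight decay penalizes large weights, nudging the network toward
# simpler solutions that tend to generalize better.
regularized_conv = layers.Conv2D(
    64, (3, 3),
    activation="relu",
    kernel_regularizer=regularizers.l2(1e-4),  # penalty strength (assumed)
)

# Dropout randomly silences a fraction of activations during training,
# so the network can't over-rely on any single feature path.
regularized_dropout = layers.Dropout(0.3)  # drop rate (assumed)
```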
