AI in Aerial Drone Mapping: How to Use It to Classify Trash

Hello everyone! I'm working on an object detection AI with a focus on sustainability and the environment.

I'm going to survey a large area of land and map it through drone imaging. Then, I'll develop an AI that identifies and classifies trash in the aerial imagery. I'll use the resulting data to find the densest areas of garbage so that cleanups can be scheduled more effectively.

How to Conduct Aerial Drone Mapping

Choose a Location

I chose a specific three-mile stretch of the Napa River. The location is accessible by foot, so conducting cleanups will be easier. The foliage in the area is also very thin, allowing the AI to recognize trash more accurately. The last reason I chose this area is that it is commonly used and walked through, so the impact of the project will be greater.

Choose Your Drone

I'll be using a DJI Mavic Air 2 drone for the surveying. Since the drone weighs over 250 grams, I'll have to register it with the FAA (Federal Aviation Administration). In addition, there are many restrictions on where I can fly, including numerous no-fly zones.

As a recreational drone pilot, I can only fly in Class G and E airspace without authorization. However, the area I'd like to survey is Class D, as the Napa Airport is nearby. Luckily, using the B4UFLY and Kittyhawk apps, I'm able to obtain LAANC (Low Altitude Authorization and Notification Capability) authorization for flights under 300 feet.

Choose an Artificial Intelligence Model

I'm using YOLO (You Only Look Once) as my object detection AI and training it on the TACO (Trash Annotations in Context) dataset. I'm using Google Colaboratory to run YOLOv5, Roboflow to analyze and modify my dataset, and Weights & Biases to visualize metrics.
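YOLOv5 reads its dataset description from a small YAML config. As a rough illustration of what that setup looks like (generated from Python so it fits in a Colab notebook), here's a minimal sketch. The paths and the three class names are placeholders, not the full TACO category list.

```python
# Sketch: generate a minimal YOLOv5 dataset config file.
# Paths and class names below are illustrative placeholders.
from pathlib import Path

def write_dataset_yaml(path, train_dir, val_dir, class_names):
    """Write a minimal YOLOv5 data config (train/val paths, class count, names)."""
    lines = [
        f"train: {train_dir}",
        f"val: {val_dir}",
        f"nc: {len(class_names)}",
        "names: [" + ", ".join(f"'{n}'" for n in class_names) + "]",
    ]
    Path(path).write_text("\n".join(lines) + "\n")
    return path

# Placeholder classes -- TACO defines its own (much longer) category list.
write_dataset_yaml("taco.yaml", "images/train", "images/val",
                   ["Plastic bottle", "Can", "Plastic bag"])
print(Path("taco.yaml").read_text())
```

The generated file is what you'd pass to YOLOv5's training script via its `--data` flag.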

I downloaded the dataset from http://tacodataset.org and chose to include the unofficial annotations. However, I went through the dataset and adjusted annotations as I saw fit.
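TACO ships COCO-style annotations, where each box is an absolute `[x, y, width, height]` in pixels, while YOLOv5 expects normalized `[class cx cy w h]` labels. The core of that conversion is just a little arithmetic; here's a sketch of the box math (tools like Roboflow handle this for you, so this is only to show what's happening under the hood):

```python
# Convert one COCO-style bounding box to YOLO format.
# COCO: [x, y, w, h] in pixels, (x, y) = top-left corner.
# YOLO: (cx, cy, w, h) normalized to [0, 1], (cx, cy) = box center.

def coco_box_to_yolo(box, img_w, img_h):
    x, y, w, h = box
    cx = (x + w / 2) / img_w   # center x, as a fraction of image width
    cy = (y + h / 2) / img_h   # center y, as a fraction of image height
    return cx, cy, w / img_w, h / img_h

# A 100x50-pixel box at (200, 300) in a 640x640 image:
print(coco_box_to_yolo([200, 300, 100, 50], 640, 640))
```

Each converted box then goes into a per-image `.txt` label file, prefixed by its class index.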

I'm starting off with a standard, un-augmented 640 by 640 pixel dataset and testing results on each of the various YOLOv5 architectures. I'm also varying batch size, epoch count, and other parameters and hyperparameters, tracked in a Google Sheet you can see here:

https://docs.google.com/spreadsheets/d/1qwhHn9A8-Id2Pt7u9xMALV1yVJUYNZNioZMQGzbDB78/edit?usp=sharing
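A sweep like the one in the spreadsheet can be generated programmatically: iterate over every combination of model size, batch size, and epoch count, and emit one training command per combination. The specific values below are illustrative, not the exact runs I logged.

```python
# Sketch: enumerate a hyperparameter grid as YOLOv5 train commands.
# Model/batch/epoch values are examples, not my actual sweep settings.
from itertools import product

models = ["yolov5s", "yolov5m", "yolov5l"]
batch_sizes = [16, 32]
epoch_counts = [100, 300]

commands = [
    f"python train.py --img 640 --cfg {m}.yaml --batch {b} --epochs {e} --data taco.yaml"
    for m, b, e in product(models, batch_sizes, epoch_counts)
]
for cmd in commands:
    print(cmd)  # 3 models x 2 batch sizes x 2 epoch counts = 12 runs
```

Each command can then be run in Colab, with Weights & Biases picking up the metrics for comparison.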

I'm continuing to work on the AI and improve its performance. There are many more options I can try, and I'll keep this updated as I go along!

Assessing Your Impact

Traditional trash surveying methods consist of manually traversing a region and estimating the amount of trash in the area. This approach is time-intensive, and it normally doesn't allow for classifying the trash or recording its locations and densities.

By using aerial drone mapping and artificial intelligence, we can improve the efficiency and quality of these surveys. The drone gives us an aerial, top-down view of the survey area, and an object detection model like YOLOv5, trained on a custom dataset such as the TACO trash dataset, removes the need for human identification.

The AI can classify the trash into categories and is able to give the exact location of an object, which allows for more precise statistics such as the spread of trash type and trash density. This method of trash surveying requires less time, provides more information, can be automated, and is more accurate.
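One simple way to turn per-object locations into density statistics is to bin each detection's coordinates into a coarse grid and count detections per cell. This is a sketch of that idea; the coordinates, cell size, and helper name are illustrative, not part of my actual pipeline.

```python
# Sketch: find the densest grid cells from georeferenced trash detections.
# Points and cell size are made-up examples for illustration.
from collections import Counter

def densest_cells(detections, cell_size=0.01, top_n=3):
    """detections: iterable of (lat, lon) pairs.
    Returns the top_n grid cells with the most detections."""
    counts = Counter(
        (round(lat / cell_size), round(lon / cell_size))
        for lat, lon in detections
    )
    return counts.most_common(top_n)

# Three clustered points and one outlier (illustrative coordinates):
pts = [(38.2100, -122.2855), (38.2101, -122.2854),
       (38.2102, -122.2856), (38.2500, -122.2800)]
print(densest_cells(pts, top_n=2))
```

The fullest cells are natural candidates for the first scheduled cleanups.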
