ONNX Empowers Underwater Plastic Waste Detection


Deep learning models in the Open Neural Network Exchange (ONNX) format can analyse underwater imagery and identify plastic waste in real time. This helps to quickly prioritise areas with the highest concentration of plastic waste for clean-up efforts.

Marine pollution by plastics ranges in size from large material such as bottles and bags, down to microplastics formed from the fragmentation of plastic material. This deeply affects the marine ecosystem. Some creatures become entangled in the plastic debris, while others such as seabirds, turtles, fish, oysters and mussels mistake plastic particles for food and ingest them, clogging their digestive systems and often causing death.

Currently, there are an estimated 50-75 trillion pieces of plastic and microplastics in the ocean. The plastic either ends up forming giant garbage patches or decomposes slowly into microplastics, which can enter the marine food chain and become incredibly damaging to sea life. The main source of this pollution is land-based: an estimated 80 per cent of the plastic in the ocean originates on land.

What is the Open Neural Network Exchange?

The Open Neural Network Exchange (ONNX) is an open source artificial intelligence ecosystem of tech firms and research institutions that develops open standards for representing machine learning algorithms, along with software tools, to encourage innovation and cooperation in the AI field.

At its core, ONNX provides a standardised format for representing machine learning models. This means that different deep learning frameworks can exchange models seamlessly, saving valuable development time and making models portable across tools and platforms.

The end product of a trained deep learning algorithm is a model file that captures the relationship between the input data and the output predictions. A neural network is one of the most effective ways to create such predictive models, although integrating it into production systems can be challenging.

These models are most frequently stored in data format files (such as PyTorch's .pth or Keras's HDF5 files). They often need to be portable, so that you can use them in contexts other than the one in which you trained the model. This is where ONNX is helpful: broadly, ONNX is designed to provide framework interoperability.
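As a quick illustration, here is a minimal sketch of exporting a trained PyTorch model to ONNX. The network and input shape are stand-ins, not the model used in this article; substitute your own trained network.

import torch
import torchvision

# Stand-in for a trained network; replace with your own model
model = torchvision.models.resnet18(weights=None)
model.eval()

# A dummy input matching the shape the model expects
dummy_input = torch.randn(1, 3, 224, 224)

# Write a framework-neutral .onnx file that other ecosystems can load
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
)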

The process of plastic waste detection

ONNX simplifies the process of converting models between different frameworks, which means developers can switch between libraries without having to start from scratch. The ONNX Runtime engine then enables high-performance inference across platforms.
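For example, a model exported as above can be served with ONNX Runtime. This is a minimal sketch assuming a model.onnx file like the one shown earlier; the input shape is a placeholder.

import numpy as np
import onnxruntime as ort

# Load the exported model; CPUExecutionProvider keeps the example portable
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Dummy input tensor; replace with real preprocessed image data
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Passing None as the first argument returns all model outputs
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)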

Among the many outstanding machine learning libraries available in a variety of languages, PyTorch, TensorFlow, MXNet and Caffe are just a few that have gained a lot of attention recently. Figure 1 shows a few such machine learning ecosystems.

Figure 1: Interoperability through ONNX across various machine learning ecosystems

The output of an object detector is a set of bounding boxes that enclose the objects in the image, along with class labels and confidence scores for each box. YOLOv8 pretrained Detect models are shown here. Detect, Segment and Pose models are pretrained on the COCO data set, while Classify models are pretrained on the ImageNet data set. Table 1 lists the various YOLOv8 versions and their relevant parameters.

Table 1: YOLOv8 versions with their corresponding specifications

Model     Size (pixels)   mAPval 50-95   Speed CPU ONNX (ms)   Speed A100 TensorRT (ms)   Params (M)   FLOPs (B)
YOLOv8n   640             37.3           80.4                  0.99                       3.2          8.7
YOLOv8s   640             44.9           128.4                 1.2                        11.2         28.6
YOLOv8m   640             50.2           234.7                 1.83                       25.9         78.9
YOLOv8l   640             52.9           375.2                 2.39                       43.7         165.2
YOLOv8x   640             53.9           479.1                 3.53                       68.2         257.8


  • mAPval values are for single-model single-scale on the COCO val2017 data set. Reproduce by yolo val detect data=coco.yaml device=0
  • Speed is averaged over COCO val images using an Amazon EC2 P4d instance. Reproduce by yolo val detect data=coco128.yaml batch=1 device=0|cpu
In the proposed setup, we export an ocean waste object detection data set containing 5136 images and implement the model in Google Colab. The resulting model, saved in the ONNX format, can detect plastic under water using Ultralytics YOLOv8.

Step 1: Installing the relevant libraries
Installing libraries is the first step of any program. So install Ultralytics for YOLOv8, and Roboflow to take images and annotations directly from the Roboflow platform. Figure 2 shows the libraries installed.

Figure 2: The libraries installed for the proposed task
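In a Colab notebook, this installation cell plausibly looks like the following (Figure 2 shows the actual output):

# Colab cell: install the two libraries used in this article
!pip install ultralytics roboflow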

Step 2: Checking the proper installation of Ultralytics
Figure 3 shows this check. If any error is found here, uninstall the library and reinstall it.

Figure 3: Checking for the proper installation of the libraries installed for the proposed task
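Ultralytics ships a built-in check for this; the cell in Figure 3 most likely runs something like:

# Prints the Ultralytics version plus Python, torch, CUDA and memory status
import ultralytics
ultralytics.checks()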

Step 3: Importing the desired data set from the Roboflow workspace directly using API keys
YOLOv8 is the latest version of YOLO by Ultralytics. As a cutting-edge, state-of-the-art model, YOLOv8 builds on the success of previous versions, introducing new features and improvements for enhanced performance, flexibility and efficiency. It supports a full range of vision AI tasks, including detection, segmentation, pose estimation, tracking and classification, which allows users to leverage its capabilities across diverse applications and domains. The code in this step fetches a data set directly from Roboflow and unzips the images and their annotations into the format YOLOv8 requires. Figure 4 shows the snippet, and a sketch of it appears after the note below.

Figure 4: Import data set from Roboflow, and unzip the data and its annotations
Note: Do not share this API key publicly; it is a private key linked to your Roboflow account.
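Here is a sketch of this download step; the workspace name, project name and version number are placeholders for your own Roboflow account.

from roboflow import Roboflow

# Authenticate with your private key; never commit it to a public repository
rf = Roboflow(api_key="YOUR_PRIVATE_API_KEY")

# Placeholder workspace/project/version; use the values from your account
project = rf.workspace("your-workspace").project("ocean-waste-detection")
dataset = project.version(1).download("yolov8")  # downloads and unzips images + annotations

print(dataset.location)  # local path referred to later as {dataset.location}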


Step 4: Saving the model into ONNX format

Figure 5 shows the CLI command used to train the YOLOv8 model, along with its parameters. A plausible reconstruction of this command appears after the parameter list below.

Figure 5: Save the model with the parameters listed into the ONNX format

The parameters used here are:
1. task=detect – indicates the detection task
2. mode=train – indicates the training mode
3. model=yolov8l.pt – indicates which model to use
4. data={dataset.location}/data.yaml – indicates the location of the data set's YAML file
5. epochs=20 – indicates the number of training iterations
6. imgsz=256 – indicates the image size used during training
7. plots=True – indicates that plots are displayed after all the epochs have run
8. optimizer=SGD – indicates the use of stochastic gradient descent
9. export=ONNX – indicates that the model is saved in the ONNX format
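Putting these parameters together, the training cell plausibly looks like the sketch below. Note that in the current Ultralytics CLI the ONNX export is issued as a separate command rather than as a training flag, so the exact invocation in Figure 5 may differ.

# Colab cell: IPython interpolates {dataset.location} from the Python variable above
!yolo task=detect mode=train model=yolov8l.pt data={dataset.location}/data.yaml epochs=20 imgsz=256 plots=True optimizer=SGD

# Export the best checkpoint to the ONNX format as a separate step
!yolo export model=runs/detect/train/weights/best.pt format=onnx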

Results

All the results are stored in the runs/detect/train folder. To use this model in any other machine learning ecosystem, we convert it to the ONNX format and then load it in that ecosystem.
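For instance, loading the exported model back for inference takes only a few lines with Ultralytics; the image path below is a placeholder.

from ultralytics import YOLO

# Load the ONNX export produced by the training run
model = YOLO("runs/detect/train/weights/best.onnx")

# Run detection on a sample image (placeholder path)
results = model("underwater_sample.jpg")
results[0].show()  # draws bounding boxes, class labels and confidence scores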

Figure 6: Result image
Figure 7: The model detecting plastic bags with 80% confidence

The images in Figures 6, 7 and 8 show plastic bags detected in different kinds of water. It can be seen that the model detects plastic bags under water with a good degree of accuracy.

Figure 8: Detection of different plastic wastes and nets near the sewage and open flow drains

Plastic waste detection under water is a pressing problem that can benefit significantly from the accuracy and speed of the deep learning models that ONNX makes portable. Its current relevance cannot be overstated, as we continue to work towards sustainable solutions for the world's growing plastic problem. The compatibility of the ONNX format with multiple platforms also positions it as a valuable tool for the future.
