Excerpt: A video/image analysis platform called Maximo Visual Inspection (MVI) has built-in deep learning models that can classify and identify objects in pictures and video streams.
In this article, we will be looking at:
- Introduction
- Basics of COCO
- Creation of a dataset
- Installing and implementing LabelMe
- Converting the dataset to COCO format
- Importing COCO datasets into Maximo Visual Inspection
- Winding it up
Introduction:
What happens when you want to work with data outside of MVI? Perhaps you want to alter or add specific images to a dataset before importing it, keep things in a standard format for coworkers who don’t have access to MVI, or use the same dataset with other tools. Nobody is required to use the dataset format specified by MVI: standard formats like COCO can work alongside MVI datasets and be used to train MVI models.
Basics of COCO
The COCO Dataset: Why?
The COCO dataset, which stands for Common Objects in Context, aims to represent a wide range of items that we frequently come across in daily life.
- Computer Vision Benchmark
The COCO dataset has been labeled so that supervised computer vision models can be trained to recognize the common objects it contains. Because these models are still far from flawless, the COCO dataset also serves as a benchmark for assessing their ongoing progress in computer vision research.
- A Transfer Learning Checkpoint
The COCO dataset was created in part to serve as a foundational dataset for computer vision model training. The model can be adjusted to learn various tasks using a custom dataset after it has been trained on the COCO dataset.
Features of COCO
- 1.5 million object instances
- 250,000 people with key points
- 330K images (>200K labeled)
- 5 captions per image
- 80 object categories
- 91 stuff categories
- Object segmentation
- Recognition in context
- Superpixel stuff segmentation
Facts and metrics from the COCO Dataset
- COCO Dataset Tasks
The COCO dataset supports the following computer vision tasks, listed roughly from least to most common:
- Keypoint Detection: Humans are marked with key points of interest in keypoint detection (elbow, knee, etc.)
- Semantic segmentation: Class labels and masks are used to identify object boundaries and object classes.
- Object detection: Bounding boxes and class labels are added to objects.
- COCO Dataset Facts
- Object detection:
- There are 121,408 images in the COCO Dataset.
- There are 883,331 object annotations present in the COCO Dataset.
- The 80 classes in the COCO Dataset
- The average image size in the COCO Dataset is 640 × 480 pixels.
- Semantic segmentation:
- Panoptic segmentation requires models to draw boundaries between individual objects (“things”) and amorphous background regions (“stuff”).
- Keypoint detection:
- 250,000 persons had their key points documented.
What Is The Format of the COCO Dataset?
The COCO dataset is delivered in a specific format known as COCO JSON.
- Info: Gives a high-level overview of the dataset.
- Licenses: Lists the picture licenses that are applicable to the dataset’s images.
- Categories: Lists the categories and supercategories available.
- Images: Provides all of the dataset’s image data but without any segmentation or bounding box information.
- Annotations: This section offers a list of each specific object annotation from every image in the dataset.
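Putting these five sections together, a minimal COCO annotation file might look like the sketch below. The dataset description, image file name, and box coordinates are invented for illustration; a real file would list many images and annotations.

```python
import json

# A minimal COCO-format annotation file, illustrating the five top-level
# sections described above. All concrete values here are made up.
coco = {
    "info": {"description": "Toy cat dataset", "version": "1.0", "year": 2023},
    "licenses": [{"id": 1, "name": "CC BY 4.0", "url": ""}],
    "categories": [{"id": 1, "name": "cat", "supercategory": "animal"}],
    "images": [
        {"id": 1, "file_name": "cat_001.jpg", "width": 640, "height": 480}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,                 # refers to images[].id
            "category_id": 1,              # refers to categories[].id
            "bbox": [120, 80, 200, 150],   # [x, y, width, height]
            "area": 200 * 150,
            "iscrowd": 0,
            "segmentation": [],
        }
    ],
}

with open("instances_toy.json", "w") as f:
    json.dump(coco, f, indent=2)
```

Note how annotations point back to images and categories by numeric id rather than by name; that indirection is what lets one JSON file describe an entire dataset.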
Creation of The Dataset
We are using cats as an example here. So, in order to build a bounding-box object identification model in IBM Maximo, we’ll be using some cat photographs. We are confident that you will be developing datasets for much more relevant deep learning applications, such as identifying manufacturing flaws or categorizing cancerous and healthy tissue.
Installing and Implementing LabelMe
The images that will make up our dataset are already available; the next step is to label them. So let’s begin by installing LabelMe. You may want to use Anaconda, which is popular for its user-friendly environment system, but whichever package manager you pick, installing LabelMe is rather simple.
If you’re going with Anaconda, begin by creating a new conda environment and then activate it. Install LabelMe into the new environment with `pip install labelme`. Once it is installed, start LabelMe by running `labelme`.
Select Open Dir, then select the directory containing your future dataset. You should see the first picture in the middle and a list of all the pictures in the directory thanks to LabelMe (probably at the bottom right). Now you can add bounding boxes, polygons, or categorization labels to your image.
After labeling this image, move on to the next one, and be certain to save the annotation file (a JSON file in LabelMe’s own format) each time you label. LabelMe should automatically fill in a sensible file name for you.
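For reference, a LabelMe annotation file for one image looks roughly like the following. The field names follow LabelMe’s JSON output, but the file name, label, and coordinates here are made up for this example:

```python
import json

# Illustrative LabelMe annotation for a single image with one bounding box.
labelme_annotation = {
    "version": "5.0.1",
    "flags": {},
    "shapes": [
        {
            "label": "cat",
            "points": [[120.0, 80.0], [320.0, 230.0]],  # two opposite corners
            "shape_type": "rectangle",
            "group_id": None,
            "flags": {},
        }
    ],
    "imagePath": "cat_001.jpg",
    "imageData": None,  # LabelMe can embed the image as base64; None keeps it external
    "imageWidth": 640,
    "imageHeight": 480,
}

with open("cat_001.json", "w") as f:
    json.dump(labelme_annotation, f, indent=2)
```

One such JSON file is produced per image, which is exactly what the conversion step in the next section consumes.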
You may be thinking if there were keyboard shortcuts to help out with LabelMe. You are in luck! LabelMe has certain keyboard shortcuts that will help you in your whole process easily.
Converting The Dataset Into The COCO Format
Now that the dataset has photos and labels, we need to convert it from LabelMe’s format to COCO. A handy script called labelme2coco.py does all the labor-intensive work; you can pick it up on GitHub!
After the repository has been cloned, we can run the script and pass it the directory name of our dataset. The script will go through each LabelMe JSON file, read the label data it contains, and then produce a single JSON file containing all of the label data in COCO format.
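The conversion can be sketched as follows. This is a simplified illustration, not the real labelme2coco.py: it handles only rectangle shapes and assumes the LabelMe fields shown in the earlier example.

```python
import glob
import json
import os

def labelme_to_coco(labelme_dir, out_path):
    """Collect every LabelMe rectangle annotation in a directory
    into one COCO-format JSON file (simplified sketch)."""
    images, annotations, categories = [], [], {}
    ann_id = 1
    paths = sorted(glob.glob(os.path.join(labelme_dir, "*.json")))
    for img_id, path in enumerate(paths, start=1):
        with open(path) as f:
            data = json.load(f)
        images.append({
            "id": img_id,
            "file_name": data["imagePath"],
            "width": data["imageWidth"],
            "height": data["imageHeight"],
        })
        for shape in data["shapes"]:
            label = shape["label"]
            # Assign each new label the next category id.
            if label not in categories:
                categories[label] = len(categories) + 1
            # LabelMe stores two opposite corners; COCO wants x, y, w, h.
            (x1, y1), (x2, y2) = shape["points"]
            x, y = min(x1, x2), min(y1, y2)
            w, h = abs(x2 - x1), abs(y2 - y1)
            annotations.append({
                "id": ann_id,
                "image_id": img_id,
                "category_id": categories[label],
                "bbox": [x, y, w, h],
                "area": w * h,
                "iscrowd": 0,
                "segmentation": [],
            })
            ann_id += 1
    coco = {
        "info": {"description": "converted from LabelMe"},
        "licenses": [],
        "categories": [{"id": i, "name": n, "supercategory": "none"}
                       for n, i in categories.items()],
        "images": images,
        "annotations": annotations,
    }
    with open(out_path, "w") as f:
        json.dump(coco, f, indent=2)
```

The essential move is the same one the real script makes: per-image LabelMe files go in, one dataset-wide COCO JSON comes out, with annotations linked to images and categories by id.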
Our directory should now contain the images, the per-image LabelMe JSON files, and the single COCO JSON label file. Before importing that dataset into MVI, we need to zip up the necessary files.
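Zipping can be done from your file manager or the command line, or with a short script like this sketch. The file names here (`trainval.json` for the COCO file, `.jpg` images) are assumptions about how your dataset directory is laid out; adjust them to match yours.

```python
import glob
import os
import zipfile

def zip_dataset(dataset_dir, zip_path, coco_json="trainval.json"):
    """Bundle the images plus the single COCO JSON file into one .zip.
    The intermediate per-image LabelMe JSON files are skipped."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(os.path.join(dataset_dir, coco_json), arcname=coco_json)
        for img in glob.glob(os.path.join(dataset_dir, "*.jpg")):
            zf.write(img, arcname=os.path.basename(img))
```

Writing each file with a flat `arcname` keeps the archive free of nested directories, which keeps the subsequent import straightforward.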
Our dataset is finally ready to be uploaded into MVI.
Importing COCO Datasets Into Maximo Visual Inspection
We must first create the dataset in MVI before importing the data from our COCO annotated dataset. Access MVI, then go to the Datasets page.
- In the top navigation bar, select Datasets.
- Click Create new data set and give it a name.
- Press Create.
Note: Avoid using the “Load .zip file” option to import our COCO dataset; that option is for datasets already in the MVI format. For COCO-format data, MVI expects us to first create a new, empty dataset and then import our files into it. We should now be viewing our blank dataset in MVI.
- Choose “Import Files” from the menu.
- Choose and upload the .zip file you prepared in the previous section.
- Once everything has been uploaded, we can use our COCO dataset in MVI like any other dataset.
Winding It Up
The world today cannot function without data, and with the sheer volume of it available to us, managing it is difficult without tools like MVI and standard formats like COCO. Beyond object detection and segmentation, the COCO dataset is used for keypoint recognition, captioning, and other tasks, which means it can assist in numerous problem-solving efforts. Because the dataset is represented as a JSON file, you can readily change or modify datasets while keeping a standardized format. It is worthwhile to explore, and perhaps even incorporate, the COCO dataset in your own models, because it stands out among AI achievements.
By automating image capture and analysis across the majority of asset situations, Maximo Visual Inspection can help your company make a considerable advancement toward the goal of predictive maintenance. This will strengthen your ability to respond quickly to reliability, quality, and safety issues, which will boost your productivity, decrease downtime, increase safety, and better meet SLA requirements.
We hope that this article has given you the key to unlocking infinite opportunities in the AI world.
Creating and importing COCO Datasets into MVI is a simple process to follow, and we hope it has made it even simpler.