Watson Visual Recognition for quality control in production lines
Globalization and worldwide competition impose new challenges on the quality and cost of production lines. Based on the IBM Watson Visual Recognition service, I designed and implemented a simple proof of concept that offers a pattern recognition solution for automatic quality control in production processes. Watson's feature extraction and machine learning techniques are used to design classification systems for a variety of image-based inspection tasks.
Watson Visual Recognition allows users to understand the contents of an image or video frame, answering the question: “What is in this image?” Submit an image, and the service returns scores for relevant classifiers representing things such as objects, events and settings. What types of images are relevant to your business? My idea is to use Watson Visual Recognition as a quality control vision system for the automatic inspection of mechanical parts containing defects.
In detail: suppose the production line produces a grooved bushing model like this:
Quality control performed on a high-quality piece should yield a high recognition score. In this case the piece is almost identical to the model, and the recognition score is 0.08473 (scores range from 0 to 0.1 for my classifier).
Conversely, a check performed on a piece with minor imperfections should yield a lower score. In this case the recognition score is 0.0387: on this bushing there is an imperfection, indicated by the red arrow:
After creating the Watson Visual Recognition service on my Bluemix account, it is necessary to create a custom classifier. The Visual Recognition service can learn from example images you upload to create a new, multi-faceted classifier. Each example file is trained against the other files in that call, and positive examples are stored as classes (negative example files are not stored as classes). These classes are grouped to define a single classifier, but they return their own scores. I prepared a zip with positive example images and a zip with negative example images, then used the service's REST API to create the classifier, uploading the different examples:
curl -X POST -F "[email protected]" -F "[email protected]" -F "[email protected]" -F "name=grooved" "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers?api_key={XXXXXc91fabdd2d08fe075bb746654c9d8a46d80}&version=2016-05-20"
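If the call succeeds, the service replies with the new classifier's metadata, including the generated classifier ID used in the classify calls below. The response has roughly this shape (fields abridged, timestamps omitted):

{
  "classifier_id": "grooved_1242640198",
  "name": "grooved",
  "owner": "...",
  "status": "training",
  "created": "...",
  "classes": [
    { "class": "grooved" }
  ]
}

The classifier cannot be used until its status moves from "training" to "ready".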
After the service completed the training phase I tried to classify an image, submitting its URL. To classify an image against my custom class I used another REST API:
curl -X GET -H "Accept-Language: en" "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify?api_key={XXXXXc91fabdd2d08fe075bb746654c9d8a46d80}&url=https://goo.gl/zuzElj&classifier_ids=grooved_1242640198&owners=me&threshold=0&version=2016-05-20"
The API returns a response for my custom classifier: the classes identified in the image and a score for each class. Scores range from 0 to 1, with a higher score indicating greater correlation.
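The JSON response looks roughly like this (fields abridged; the score shown is the one from my high-quality example above):

{
  "custom_classes": 1,
  "images": [
    {
      "classifiers": [
        {
          "classifier_id": "grooved_1242640198",
          "name": "grooved",
          "classes": [
            { "class": "grooved", "score": 0.08473 }
          ]
        }
      ],
      "source_url": "https://goo.gl/zuzElj"
    }
  ],
  "images_processed": 1
}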
After these preliminary manual tests I built my POC with Node-RED to automate the following end-to-end scenario:
- I simulated a visual inspection system for high-speed checking of production lines (in my example, the grooved bushing)
- The images are uploaded to cloud storage
- A Node-RED flow periodically analyzes the uploaded images
- A quality score is calculated by the Watson Visual Recognition API
- Maximo Asset Management meters are updated with the pieces produced and the pieces discarded by the quality control
- A work order is opened in Maximo Asset Management if the production line issues too many low-quality pieces (a sketch of this scoring and routing logic follows the list)
- A web dashboard shows the quality control results in real time
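Here is a minimal sketch of the decision logic as a Node-RED function node with two outputs. The 0.05 threshold is my assumption (it simply splits the two example scores above, 0.08473 and 0.0387), and the meter names and work-order limit are hypothetical placeholders:

// Node-RED function node with two outputs: good pieces and defective pieces.
// ASSUMPTIONS: msg.payload holds the parsed Watson classify response;
// QUALITY_THRESHOLD, the meter names and WORKORDER_LIMIT are illustrative values.
var QUALITY_THRESHOLD = 0.05;
var WORKORDER_LIMIT = 10;

// score produced by the custom classifier for this image
var score = msg.payload.images[0].classifiers[0].classes[0].score;

// running totals kept in flow context
var good = flow.get('goodCount') || 0;
var bad = flow.get('badCount') || 0;

if (score >= QUALITY_THRESHOLD) {
    flow.set('goodCount', good + 1);
    msg.meter = 'BUSHING-GOOD';      // hypothetical Maximo meter name
    return [msg, null];              // output 1 -> update the "good pieces" meter
}

flow.set('badCount', bad + 1);
msg.meter = 'BUSHING-DEFECT';        // hypothetical Maximo meter name
msg.openWorkOrder = (bad + 1) >= WORKORDER_LIMIT;  // flag for the work-order branch
return [null, msg];                  // output 2 -> update the "defect" meter

Downstream, each output feeds the nodes that call the Maximo meter-update REST API, and the work-order flag drives the branch that opens the work order.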
Here is my Node-RED flow:
The Node-RED flow is based on the following logic sections:
- nodes to fetch the image files from cloud storage; I'm using Google Drive, where the images are uploaded by IoT devices attached to the production line
- nodes to assemble and call the REST API URL of the Watson Visual Recognition service (a sketch of this node follows the list)
- nodes to send data to the dweet.io service; I'm using the freeboard web dashboard with a dweet datasource
- nodes to assemble and call the REST API URL of the Maximo meter-update service; I'm calling two different meters based on the visual recognition score
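As an example, the request-assembly section can be a single function node in front of an http request node. This is a sketch under two assumptions: the image URL arrives in msg.payload from the Google Drive nodes, and the API key has been stored in flow context under 'apiKey':

// Node-RED function node: assemble the Watson Visual Recognition classify URL.
// ASSUMPTIONS: msg.payload contains the public URL of the fetched image and
// the API key was stored in flow context as 'apiKey'.
var apiKey = flow.get('apiKey');

msg.url = 'https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify' +
    '?api_key=' + apiKey +
    '&url=' + encodeURIComponent(msg.payload) +
    '&classifier_ids=grooved_1242640198' +
    '&owners=me' +
    '&threshold=0' +
    '&version=2016-05-20';
msg.method = 'GET';
return msg;   // the next http request node performs the call

The dweet.io section is even simpler: the flow POSTs the counters to https://dweet.io/dweet/for/<thing-name> (a thing name of your choice), and freeboard polls that thing as its datasource.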
Here is the demo web dashboard; my POC is currently up and running:
As many of you know, I really enjoy customizing Maximo. Here is my Locations application with the real-time quality control:
Here is the Maximo meters section associated with the production line: a meter for the high-quality pieces and a meter for the pieces with defects:
I’m excited because I’m going to present this POC to a real customer. This is one of the first real customer scenarios in which it’s possible to use a combination of Watson machine learning, IoT devices and a mashup of micro-services in Node-RED.
I will update the blog as soon as I can. Thanks for reading.