Watson IoT Receptionist BOT
In this article I present the Watson IoT Receptionist BOT, which has been developed within the Watson IoT initiatives at the IBM Rome Lab.
Alessandro Imparato (in the picture) has just joined the team that developed the idea. Alessandro worked on the device image acquisition and on the IoT device configuration, and he shared the recent Watson IoT trends with great enthusiasm.
What do you think of when you hear the term receptionist?
Being a receptionist has always been much harder work than people give it credit for. But fortunately, the perception of receptionists seems to be changing, due in part to the many new technologies automating office spaces.
The “typical” tasks of a receptionist are answering phones and greeting people who walk through the door. Of course, receptionists have always done many other things to keep offices running smoothly, but these are the tasks that most people think of first. In some companies, virtual receptionists and visitor management systems have replaced front-desk staff altogether.
So the idea is to take a first step towards a visitor management system, using IoT devices and visual recognition to propose a visitor registration solution: a Bluemix cloud-based solution to register, track and manage visitors. Features include e-mail notifications as well as a real-time dashboard where you can monitor the activity.
Well, let me introduce Watson IoT Receptionist BOT:
From a hardware point of view the roBOT is composed of a Raspberry Pi, a Sense HAT board and an ELP USB camera, plus a wooden box. I will dedicate a specific post to the device details.
The roBOT’s functionality is implemented on Bluemix using the Watson Visual Recognition service, the IoT Platform service, the Cloudant service, and the external service dweet.io to feed the web dashboard.
The first scenario addressed by Watson IoT Receptionist BOT is the following:
- A visitor arrives at the office reception.
- The visitor approaches the roBOT and puts their face close to the camera.
- The visitor clicks the Sense HAT joystick; the board’s LED matrix changes color when the photo is taken.
- Through the IoT Platform service, the Raspberry Pi sends the photo to the Bluemix cloud.
- The Visual Recognition service analyzes the photo.
- The dashboard shows the person with the best recognition score, plus some information stored in the database (title, company, notes…).
- An e-mail with this information is sent to the human receptionist, who enables the person to visit the office.
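The capture step on the Pi can be sketched as follows, assuming the standard `sense_hat` Python library and the `fswebcam` command-line tool for the USB camera. `PHOTO_DIR` and the file-name scheme are my own placeholders, not taken from the project:

```python
# Sketch of the capture step on the Raspberry Pi: wait for a middle-click
# on the Sense HAT joystick, grab a frame from the USB camera, and flash
# the LED matrix green to confirm. Hypothetical paths and colors.
import subprocess
import time
from pathlib import Path

PHOTO_DIR = Path("/home/pi/photos")  # hypothetical location


def photo_filename(ts: float) -> str:
    """Build a timestamped file name for the captured photo."""
    return time.strftime("visitor-%Y%m%d-%H%M%S.jpg", time.localtime(ts))


def capture(path: Path) -> None:
    # fswebcam grabs one frame from the USB camera; --no-banner drops the
    # timestamp overlay it adds by default
    subprocess.run(
        ["fswebcam", "-r", "640x480", "--no-banner", str(path)],
        check=True,
    )


def main() -> None:
    from sense_hat import SenseHat  # only available on the Pi
    sense = SenseHat()
    sense.clear(0, 0, 255)  # blue: waiting for a visitor
    while True:
        event = sense.stick.wait_for_event()
        if event.action == "pressed" and event.direction == "middle":
            capture(PHOTO_DIR / photo_filename(time.time()))
            sense.clear(0, 255, 0)  # green: photo taken
            time.sleep(2)
            sense.clear(0, 0, 255)


if __name__ == "__main__":
    main()
```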
Of course, at the moment there are security issues to solve before fully automating the process and allowing the visitor to enter the office without a human check.
Before visiting the office, visitors must send a set of photos that will be used for recognition on all future visits. The photos are used to train the visual recognition machine learning model. You can find more technical details in my previous posts here and here.
To implement the visitor visual recognition I created a custom classifier where each class is a visitor to recognize.
For my tests I did not have a real photo album yet, so I imagined that well-known characters were visiting us and trained the visual recognition model with their public photos. So I put in my custom classifier an Elvis Presley class, a Freddie Mercury class, a Groucho Marx class, a Jim Morrison class and several others 🙂
Of course I trained the service with some personal gurus:
Miles Davis
Linus Torvalds
Groucho Marx
To train the visual recognition model I also provided some negative examples: I used roughly 50 photos very different from a human face.
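The one-off training call can be sketched against the Watson Visual Recognition v3 REST API, which takes one zip of photos per class plus a zip of negative examples. The zip file names, the `VR_API_KEY` placeholder and the helper names below are assumptions, not taken from the project:

```python
# Minimal sketch of training a custom classifier with the Visual
# Recognition v3 REST API. Zip names and credentials are placeholders.
VR_URL = "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers"
VR_API_KEY = "your-service-api-key"  # placeholder


def positive_examples_field(class_name):
    """VR expects one multipart field per class, named <class>_positive_examples."""
    return class_name + "_positive_examples"


def create_visitors_classifier():
    import requests  # third-party; pip install requests
    with open("groucho_marx.zip", "rb") as groucho, \
         open("not_faces.zip", "rb") as negatives:
        resp = requests.post(
            VR_URL,
            params={"api_key": VR_API_KEY, "version": "2016-05-20"},
            data={"name": "visitors"},
            files={
                positive_examples_field("groucho_marx"): groucho,
                "negative_examples": negatives,  # ~50 non-face photos
            },
        )
    resp.raise_for_status()
    return resp.json()  # contains the new classifier_id
```

Each additional visitor just needs another `<class>_positive_examples` part in the same request.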
Negative examples
Moreover, to complete the scenario, I stored some basic data such as title, company and notes for each classified visitor using a Cloudant DB. Cloudant is a NoSQL database; it works with self-describing JSON documents through a REST API that makes every document in your Cloudant database accessible as JSON via a URL.
Documents can be retrieved, stored, or deleted individually or in bulk and can also have files attached:
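A visitor document and the REST call to store it might look like this; the field names, database name and `save_visitor` helper are my assumptions, not the project’s actual schema:

```python
# Sketch of storing per-visitor data in Cloudant via its REST API.
# Field names and the account/database URL are placeholders.
def visitor_doc(name, title, company, notes=""):
    """A self-describing JSON document, keyed by the classifier class name."""
    return {"_id": name, "title": title, "company": company, "notes": notes}


def save_visitor(account_url, db, doc, auth):
    import requests  # third-party; pip install requests
    # PUT stores the document at a stable URL: <account>/<db>/<_id>
    r = requests.put(f"{account_url}/{db}/{doc['_id']}", json=doc, auth=auth)
    r.raise_for_status()
    return r.json()  # {"ok": true, "id": ..., "rev": ...}


# Usage (placeholders):
# save_visitor("https://ACCOUNT.cloudant.com", "visitors",
#              visitor_doc("groucho_marx", "Comedian", "Marx Brothers"),
#              ("USERNAME", "PASSWORD"))
```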
From a software point of view, we used two Node-RED flows to implement the end-to-end scenario.
Alessandro developed the first one; the flow runs on a Node-RED server installed on the Raspberry Pi:
Here are the main flow’s features:
- Camera handling and photo acquisition.
- Photo publishing on an internal web server.
- Image URL publishing on the IoT Platform.
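The last step can be sketched with plain MQTT, following the Watson IoT Platform device conventions: client IDs of the form `d:<org>:<type>:<id>`, events published on `iot-2/evt/<event>/fmt/json`, and token authentication as user `use-token-auth`. The event name `photo` and the payload field are my assumptions:

```python
# Sketch of publishing the photo URL to the Watson IoT Platform over MQTT,
# assuming the paho-mqtt package. Event name and payload are placeholders.
import json


def device_client_id(org, device_type, device_id):
    # Watson IoT Platform device client IDs follow d:<org>:<type>:<id>
    return f"d:{org}:{device_type}:{device_id}"


def event_topic(event_id):
    # Device events are published on iot-2/evt/<event>/fmt/json
    return f"iot-2/evt/{event_id}/fmt/json"


def publish_photo_url(org, device_type, device_id, token, url):
    import paho.mqtt.client as mqtt  # third-party; pip install paho-mqtt
    client = mqtt.Client(client_id=device_client_id(org, device_type, device_id))
    client.username_pw_set("use-token-auth", token)
    client.connect(f"{org}.messaging.internetofthings.ibmcloud.com", 1883)
    payload = json.dumps({"d": {"photoUrl": url}})
    client.publish(event_topic("photo"), payload, qos=1)
    client.disconnect()
```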
The second one runs on Bluemix:
Here are the main flow’s features:
- Fetching the image URL from the IoT Platform service.
- Calling the Visual Recognition API on the acquired image.
- Calculating the maximum recognition score.
- Retrieving visitor information from the Cloudant DB.
- Publishing the information via dweet.
- Feeding the web dashboard.
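The score selection and the dweet publication can be sketched as follows. The response shape follows the Visual Recognition v3 classify API; the dweet “thing” name and payload fields are placeholders:

```python
# Sketch of the cloud-side post-processing: pick the best class from a
# Visual Recognition classify response and publish it on dweet.io.
def best_class(vr_result):
    """Return the class dict with the highest score from a v3 classify response."""
    classes = vr_result["images"][0]["classifiers"][0]["classes"]
    return max(classes, key=lambda c: c["score"])


def publish_to_dashboard(thing, visitor, score):
    import requests  # third-party; pip install requests
    # dweet.io: POST /dweet/for/<thing> publishes a JSON payload that the
    # freeboard dashboard can poll
    r = requests.post(f"https://dweet.io/dweet/for/{thing}",
                      json={"visitor": visitor, "score": score})
    r.raise_for_status()
```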
Well, how do we test the Watson IoT Receptionist BOT?
The Watson IoT Receptionist BOT acquires the visitor’s image from the camera when the visitor touches the Sense HAT joystick, but this method did not fit my initial tests:
Therefore I printed the visitor’s image, held the print in front of my face and took a photo 🙂 Of course, the images used during the tests are different from the images used during the training phase.
Here are the test results on the freeboard web dashboard:
Test result for Miles Davis:
Test result for myself:
As I said before, there are security issues with fully automating the process, so an e-mail with the same dashboard information is sent to the real receptionist, who allows the visitor access.
Bye Bye from Watson IoT Receptionist BOT 🙂