In my previous post, we wrote some code to determine if we had Doug in an image. But we were not comparing Doug against anyone else. Now it’s time to make a small tweak and train the system to tell us whether an image contains Doug or someone else – like Bill.
The code for this will be very similar to last time, but now there is a risk that we get back a different person than we expect. So how do we compare one person against another?
The Azure Face API makes this pretty easy. The “identify” call returns a list of candidate people, each of which carries a GUID for that person. We can pass that GUID to the Get method on the PersonGroupPerson API to retrieve the matching person. This gives us a pretty simple method that returns the person found within an image.
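To make the identify-then-get flow concrete, here is a minimal sketch of that logic. The service calls are replaced by stand-in functions returning hard-coded data (the `identify` response shape and the `person_group` dictionary are assumptions for illustration, not the live Azure API), so the focus is purely on how the candidate GUID links the two calls:

```python
import uuid

# Stand-in for the trained person group: person GUID -> name,
# as the real PersonGroupPerson store would hold.
doug_id = str(uuid.uuid4())
bill_id = str(uuid.uuid4())
person_group = {doug_id: "Doug", bill_id: "Bill"}

def identify(face_id):
    """Stand-in for the Identify call: returns candidate matches,
    each with a person GUID and a confidence score."""
    # A real call would match the detected face against the trained
    # person group; here we hard-code a plausible response.
    return [{"personId": doug_id, "confidence": 0.92}]

def get_person(person_id):
    """Stand-in for PersonGroupPerson Get: GUID -> person record."""
    return {"personId": person_id, "name": person_group[person_id]}

def who_is_in_image(face_id):
    """Pick the highest-confidence candidate and resolve its name."""
    candidates = identify(face_id)
    if not candidates:
        return None
    best = max(candidates, key=lambda c: c["confidence"])
    return get_person(best["personId"])["name"]

print(who_is_in_image("some-face-guid"))  # prints "Doug"
```

The key point is that identification and person lookup are two separate calls tied together by the candidate’s GUID.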
The overall code for this version is very similar to what we’ve done so far:
- Create a faces client
- Create a person group
- Create two people (Doug & Bill)
- Add images of Doug to person “Doug”
- Add images of Bill to person “Bill”
- Train the image recognizer
- Test recognizer to identify a particular person
- Let the recognizer tell us who it thinks it found
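The steps above can be sketched end to end. This is a toy, in-memory stand-in (the `FakeFaceService` class and its matching-by-filename logic are invented for illustration; the real service matches facial features after training), but the sequence of calls mirrors the list:

```python
import uuid

class FakeFaceService:
    """In-memory stand-in mirroring the sequence of Face API calls."""

    def __init__(self):
        self.people = {}   # person GUID -> {"name": ..., "faces": [...]}
        self.trained = False

    def create_person(self, name):
        person_id = str(uuid.uuid4())
        self.people[person_id] = {"name": name, "faces": []}
        return person_id

    def add_face(self, person_id, image):
        self.people[person_id]["faces"].append(image)

    def train(self):
        self.trained = True

    def identify(self, image):
        # The real service compares facial features; this toy matches
        # on exact image names just to show the call sequence.
        assert self.trained, "train the group before identifying"
        for person_id, person in self.people.items():
            if image in person["faces"]:
                return [{"personId": person_id, "confidence": 1.0}]
        return []

    def get_person(self, person_id):
        return self.people[person_id]["name"]

svc = FakeFaceService()                  # create client + person group
doug = svc.create_person("Doug")         # create two people
bill = svc.create_person("Bill")
svc.add_face(doug, "doug1.jpg")          # add images of Doug
svc.add_face(bill, "bill1.jpg")          # add images of Bill
svc.train()                              # train the recognizer
candidates = svc.identify("bill1.jpg")   # test with a particular image
print(svc.get_person(candidates[0]["personId"]))  # prints "Bill"
```

With two trained people, the recognizer now tells us *who* it found rather than just whether it found Doug.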