Face-fit follows the idea of the Toolkit's Strike-a-pose app, i.e. asking the user to replicate an artwork, but it concentrates on facial expressions, which require much more refined matching.
This application can be used on a mobile phone or on a PC, but due to technical limitations of some required computer vision libraries, the JavaScript version for mobile phones needs to offload some image-processing functionality to a server. For this reason the app has been developed using two codebases: a JavaScript one, with TensorFlow.js, for mobile phones, and a Python version, using OpenCV and TensorFlow, for the desktop app used in museum installations.
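As a rough illustration of this split, the following minimal sketch (in Python, using Flask and OpenCV) shows how an image-processing step could be exposed to the JavaScript frontend as a REST endpoint. The endpoint name, request field and processing step are illustrative assumptions, not the app's actual API.

```python
# Sketch: a minimal Flask endpoint that receives a webcam frame from the
# JavaScript frontend, processes it with OpenCV server-side and returns the
# result. Endpoint name and field names are assumptions for illustration.
import cv2
import numpy as np
from io import BytesIO
from flask import Flask, request, send_file

app = Flask(__name__)

@app.route("/api/process", methods=["POST"])
def process_frame():
    # Decode the webcam frame uploaded by the frontend.
    data = np.frombuffer(request.files["frame"].read(), dtype=np.uint8)
    frame = cv2.imdecode(data, cv2.IMREAD_COLOR)
    # ... OpenCV processing not available client-side would go here ...
    ok, encoded = cv2.imencode(".jpg", frame)
    return send_file(BytesIO(encoded.tobytes()), mimetype="image/jpeg")

if __name__ == "__main__":
    app.run()
```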
Face-Fit App Interface
The application asks users to replicate the head pose and expression of portraits by famous painters and transfers the user's face onto the artwork, generating a new image. The application was designed through a usability study carried out following an iterative design approach with three groups of 5 people. The user stands in front of the smartphone or installation equipped with a camera and is presented with a series of portrait paintings in a vertical carousel, from which they can choose the artwork to match. At that point the application shows a ghost image of the user's face, which the user must try to superimpose on the face in the painting to find a perfect match, as shown in the following figure.
The app does not ask for or log personal data, and it performs as much computation as possible on end-user devices, as explained in the Privacy Policy.
Face-Fit - Privacy Policy
The ghost image solution was the result of our usability study, which addressed the issue of keeping the user concentrated on the task without losing the fun of the game. At first we had provided some visual hints to find the right pose, but they distracted the user from the painting and therefore from the game.
User interaction: the ghost image, i.e. the user's face superimposed on the artwork, gives feedback to the user to change their pose and expression to better match those of the artwork.
A faster-than-real-time face mesh prediction network is used to obtain 468 3D points for each face, even on mobile phones. The points are used to compute the pose of the whole head. Once the pose is matched, the positions of the eyes, eyebrows and mouth are matched. When both pose and facial expression match, the user's face is substituted for that of the painting and the description of the artwork is provided.
Once the pose is matched, the user obtains information on the artwork and can download the generated images for sharing on social networks. By entering their email address, users can have the system automatically send them the images and in-depth content about the artworks managed by the museum. The user's data and email address are not stored and are deleted from the system as soon as the email is sent, in accordance with user data privacy.
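A minimal sketch of this landmark extraction in Python is shown below; it uses MediaPipe's Face Mesh solution, which exposes the same 468-point topology, together with a deliberately rough roll/yaw heuristic computed from two outer eye-corner landmarks. The landmark indices and the heuristic are illustrative assumptions, not the app's actual matching logic.

```python
# Sketch: extract 468 3D landmarks with MediaPipe Face Mesh and estimate a
# rough head pose. Landmark indices and the pose heuristic are assumptions.
import cv2
import mediapipe as mp
import numpy as np

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1)

def face_landmarks(frame_bgr):
    """Return a (468, 3) array of normalized landmark coordinates, or None."""
    results = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None
    return np.array([[p.x, p.y, p.z] for p in results.multi_face_landmarks[0].landmark])

def rough_pose(landmarks):
    """Very rough roll/yaw estimate from the two outer eye corners (indices assumed)."""
    left, right = landmarks[33], landmarks[263]
    roll = np.degrees(np.arctan2(right[1] - left[1], right[0] - left[0]))
    yaw = np.degrees(np.arctan2(right[2] - left[2], right[0] - left[0]))
    return roll, yaw
```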
In-depth content and generated images sent via email for sharing on social networks.
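A minimal sketch of this send-and-delete step is given below, assuming a plain SMTP setup; the sender address, SMTP host and file path are placeholders, not the app's actual configuration.

```python
# Sketch: e-mail the generated image and delete it afterwards, so that no user
# data is retained. Host, sender address and path are placeholder assumptions.
import os
import smtplib
from email.message import EmailMessage

def send_and_forget(recipient, image_path, smtp_host="localhost"):
    msg = EmailMessage()
    msg["Subject"] = "Your Face-Fit image"
    msg["From"] = "noreply@example.org"
    msg["To"] = recipient
    msg.set_content("Here is the image you generated, with more about the artwork.")
    with open(image_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="image", subtype="jpeg",
                           filename=os.path.basename(image_path))
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
    os.remove(image_path)  # the user's data is deleted as soon as the email is sent
```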
The museum can access the Admin Dashboard to add the artworks that users can interact with, and to manage the content associated with each artwork, which is shared with the user for a more in-depth analysis of the artworks.
Face Fit - Admin Dashboard
Unlike Strike-a-pose, the mobile version of the Face-fit application differs from the stand-alone version. This is due to the need for some OpenCV functions, not available in JavaScript, to perform the color correction that adapts the style of the images captured from the webcam to the style of the painting. These functionalities have therefore been implemented as RESTful APIs in Python, while the frontend is implemented in JavaScript with TensorFlow.js, porting the Python code of the standalone version.
The Face Mesh neural network used is based on a MobileNetV2 architecture that can run in real time even on mid-range mobile phones.
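The kind of color correction involved can be sketched as a Reinhard-style color transfer in LAB space with OpenCV, as below; this is an assumption about the approach, and the app's actual implementation may differ.

```python
# Sketch: Reinhard-style color transfer in LAB space with OpenCV, shifting the
# color statistics of the webcam frame towards those of the painting.
import cv2
import numpy as np

def transfer_color(source_bgr, target_bgr):
    """Return `source_bgr` with its color statistics moved towards `target_bgr`."""
    src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    src_mean, src_std = cv2.meanStdDev(src)
    tgt_mean, tgt_std = cv2.meanStdDev(tgt)
    # Per-channel normalization: remove source statistics, apply target ones.
    result = (src - src_mean.reshape(1, 1, 3)) \
        * (tgt_std / np.maximum(src_std, 1e-6)).reshape(1, 1, 3) \
        + tgt_mean.reshape(1, 1, 3)
    return cv2.cvtColor(np.clip(result, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)
```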
Usage Example
The app requires a server to host the mobile web app and to provide the RESTful APIs of the backend. A QR code can be used to avoid typing the URL of the web app, as in the sketch below.
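For example, a QR code pointing to the deployed web app could be generated with the third-party `qrcode` package (an assumption; any QR code generator works):

```python
# Sketch: generate a QR code pointing to the deployed web app, assuming the
# third-party `qrcode` package (pip install "qrcode[pil]").
import qrcode

qrcode.make("https://reinherit-facefit.herokuapp.com").save("facefit_qr.png")
```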
Guidelines for reuse
The simplest type of reuse is replacing the selected sample artworks, along with the associated information, with those from the collection of the museum/organization that wants to customize the apps. Setup of the apps is based on Docker, to simplify the installation of the backend.
It is possible to extend the apps by introducing new types of challenges, e.g. combining classes of artworks in Face-fit as is done in Strike-a-pose, creating collections of artworks according to some criterion. The challenges of Strike-a-pose can be changed to follow a criterion other than full/upper/lower body parts, e.g. grouping artworks by style or period.
Changing the GUIs is relatively easy, since it only requires updating the HTML5 of Strike-a-pose or the Kivy code of Face-fit.
Source code
The source code of the app is freely available here:
https://github.com/ReInHerit/face-fit
Demo
A demo is available at the following link (use the Chrome or Firefox browser):
https://reinherit-facefit.herokuapp.com
For more information and support, contact marco.bertini@unifi.it, MICC - Media Integration and Communication Center, University of Florence, Italy.