
This is the official implementation of the IVA'20 paper "Let's Face It: Probabilistic Multi-modal Interlocutor-aware Generation of Facial Gestures in Dyadic Settings".


Let's Face It

Probabilistic Multi-modal Interlocutor-aware Generation of Facial Gestures in Dyadic Settings

This paper received the Best Paper Award at IVA’20.

A short video summarizing the paper is available.



Abstract

To enable more natural face-to-face interactions, conversational agents need to adapt their behavior to their interlocutors. One key aspect of this is generation of appropriate non-verbal behavior for the agent, for example facial gestures, here defined as facial expressions and head movements. Most existing gesture-generating systems do not utilize multi-modal cues from the interlocutor when synthesizing non-verbal behavior. Those that do, typically use deterministic methods that risk producing repetitive and non-vivid motions. In this paper, we introduce a probabilistic method to synthesize interlocutor-aware facial gestures – represented by highly expressive FLAME parameters – in dyadic conversations. Our contributions are: a) a method for feature extraction from multi-party video and speech recordings, resulting in a representation that allows for independent control and manipulation of expression and speech articulation in a 3D avatar; b) an extension to MoGlow, a recent motion-synthesis method based on normalizing flows, to also take multi-modal signals from the interlocutor as input and subsequently output interlocutor-aware facial gestures; and c) a subjective evaluation assessing the use and relative importance of the input modalities. The results show that the model successfully leverages the input from the interlocutor to generate more appropriate behavior.
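The synthesis model extends MoGlow, a motion-generation method based on normalizing flows, so that each flow step is conditioned on multi-modal signals from both the agent and the interlocutor. As a rough, illustrative sketch of that idea (not the paper's code), the snippet below implements a conditional affine coupling step in PyTorch. MoGlow itself conditions its coupling transforms with recurrent networks over a window of past poses and control inputs, whereas this simplified sketch uses a plain MLP; all names and dimensions are assumptions.

```python
# Illustrative sketch only: a conditional affine coupling step in the spirit
# of Glow/MoGlow. The conditioning vector would concatenate the agent's speech
# features with the interlocutor's speech and facial features. All names and
# sizes are assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn


class ConditionalAffineCoupling(nn.Module):
    def __init__(self, dim, cond_dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        # Small MLP mapping one half of the gesture vector plus the
        # conditioning signal to per-dimension scale and shift.
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, cond):
        # x: (batch, dim) facial-gesture features, cond: (batch, cond_dim)
        x_a, x_b = x[:, : self.half], x[:, self.half :]
        log_s, t = self.net(torch.cat([x_a, cond], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)  # keep scales numerically well-behaved
        y_b = x_b * torch.exp(log_s) + t
        log_det = log_s.sum(dim=-1)  # contribution to the flow's log-likelihood
        return torch.cat([x_a, y_b], dim=-1), log_det

    def inverse(self, y, cond):
        # Exact inverse, used when sampling new gestures from the base density.
        y_a, y_b = y[:, : self.half], y[:, self.half :]
        log_s, t = self.net(torch.cat([y_a, cond], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)
        x_b = (y_b - t) * torch.exp(-log_s)
        return torch.cat([y_a, x_b], dim=-1)
```

Conditioning the coupling layers on the interlocutor's signals, rather than only on the agent's own speech, is what makes the sampled motion interlocutor-aware while the probabilistic flow keeps it from becoming repetitive.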

FLAME facial feature dataset

Please contact me via email to get access to the dataset.

The dataset is 6.4 GB, and the features are provided at 25 fps.
The data is organized in the following structure (a hypothetical loading example follows the list below):
sessions/{1,2,...,54}/participants/{P1,P2}

Each participant directory contains the following FLAME parameter streams:

tf_exp - expression parameters
tf_pose - neck, eye, and jaw rotation parameters
tf_shape - facial shape parameters
tf_rot - global rotation
tf_trans - global translation
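
As a convenience, here is a hypothetical loading sketch for the directory layout above. The on-disk file format is an assumption (one NumPy .npy array per parameter stream); adjust it to whatever format the released dataset actually uses.

```python
# Hypothetical loader for the released FLAME feature dataset.
# Assumes one .npy array per parameter stream; this is NOT guaranteed
# by the dataset description above.
from pathlib import Path
import numpy as np

DATA_ROOT = Path("sessions")  # assumed location of the extracted dataset
STREAMS = ["tf_exp", "tf_pose", "tf_shape", "tf_rot", "tf_trans"]


def load_participant(session_id: int, participant: str) -> dict:
    """Load all FLAME parameter streams for one participant (P1 or P2)."""
    base = DATA_ROOT / str(session_id) / "participants" / participant
    return {name: np.load(base / f"{name}.npy") for name in STREAMS}


# Example usage: expression parameters for participant P1 in session 1.
# Features are sampled at 25 fps, so frame i corresponds to time i / 25 s.
# params = load_participant(1, "P1")
# print(params["tf_exp"].shape)
```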

Video samples

You can find some video samples that were used in the user studies here. More video samples with different model settings and sound will be available soon; in the meantime, please see the short summary video above for additional samples.

Citing

@inproceedings{jonell2020let,
    author = {Jonell, Patrik and Kucherenko, Taras and Henter, Gustav Eje and Beskow, Jonas},
    title = {Let's Face It: Probabilistic Multi-Modal Interlocutor-Aware Generation of Facial Gestures in Dyadic Settings},
    year = {2020},
    isbn = {9781450375863},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://dl.acm.org/doi/10.1145/3383652.3423911},
    doi = {10.1145/3383652.3423911},
    booktitle = {Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents},
    articleno = {31},
    numpages = {8},
    location = {Virtual Event, Scotland, UK},
    series = {IVA '20}
}