Detecting Joint Meaning Construal by Language and Gesture
- Mentors: Suzie Xi, Mark Turner, Javier Valenzuela, Anna Wilson, Maria Hedblom, Francis Steen, Frederico Belcavello, Inés Olza, Tiago Torrent
- Organization: Red Hen Lab
The project aims to develop a prototype capable of constructing meaning from multiple modalities of communication. Specifically, for a co-speech gesture dataset, we combine manual annotations with algorithmically derived metadata to devise a mechanism that disambiguates meaning by weighing the influence of every modality present in a given frame. Because only a handful of annotated datasets have been made available by Red Hen, we leverage semi-supervised learning techniques to label additional unlabeled data. Furthermore, since each frame may admit several interpretations, we have human annotators label a subset of our validation set and report performance on that subset.
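As a rough illustration of the semi-supervised step, the sketch below applies simple pseudo-labeling over per-frame feature vectors (e.g., concatenated gesture and speech features): a classifier trained on the small manually annotated set labels unlabeled frames whose prediction confidence clears a threshold, and those frames are folded back into the training set. All names, thresholds, and the choice of classifier here are illustrative assumptions, not part of Red Hen's actual pipeline.

```python
# Minimal pseudo-labeling sketch (hypothetical setup; not Red Hen's actual pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label(X_labeled, y_labeled, X_unlabeled, threshold=0.9, rounds=3):
    """Iteratively expand the labeled set with high-confidence predictions."""
    X_train, y_train = X_labeled.copy(), y_labeled.copy()
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    for _ in range(rounds):
        if len(X_unlabeled) == 0:
            break
        probs = clf.predict_proba(X_unlabeled)
        confident = probs.max(axis=1) >= threshold
        if not confident.any():
            break
        # Adopt the model's own labels for confident frames only,
        # then retrain on the enlarged training set.
        X_train = np.vstack([X_train, X_unlabeled[confident]])
        y_train = np.concatenate([y_train, probs[confident].argmax(axis=1)])
        X_unlabeled = X_unlabeled[~confident]
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return clf, X_train, y_train
```

In this sketch, each row of `X_labeled` and `X_unlabeled` would stand for one frame's multimodal feature vector; the confidence threshold trades off how aggressively unlabeled frames are absorbed against the risk of propagating wrong labels.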