Red Hen Lab

Research on Multimodal Communication

Technologies
python, big data science, computer vision, nlp, machine learning
Topics
metadata, media, language, multimodal communication, gesture

Red Hen Lab is an international distributed cooperative of researchers in multimodal communication. We include senior professors at major research universities and senior developers in technology corporations, as well as junior professors, postdoctoral researchers, graduate students, undergraduate students, and even a few advanced high school students. Red Hen develops code in Natural Language Processing, ASR, audio parsing, gesture analysis, media analysis, computer vision, and multimodal analysis.

Red Hen's multimodal communication research involves locating, identifying, and characterizing auditory and visual elements in videos and pictures. We may provide annotated clips or images and pose the challenge of developing machine learning tools to find additional instances in a much larger dataset. Examples include gestures, eye movements, and tone of voice. We favor projects that combine more than one modality and have a clear communicative function, such as floor-holding techniques. Once a feature has been successfully identified in our full dataset of several hundred thousand hours of news videos, cognitive linguists, communication scholars, and political scientists can use this information to study higher-level phenomena in language, culture, and politics, and to develop a better understanding of the full spectrum of human communication. Our dataset is recorded in a large number of languages, giving Red Hen a global perspective.

For GSoC 2021, we invite proposals from students for components of a unified multimodal processing pipeline that extracts information from text, audio, and video, and for integrative cross-modal feature detection tasks. Red Hen Lab is directed jointly by Francis Steen (UCLA) and Mark Turner (Case Western Reserve University).

2021 Program

Successful Projects

Contributor
Shreyan Ganguly
Mentor
Vera Tobin, Ahmed Ismail, Mark Williams, John Bell
Organization
Red Hen Lab
Machine detection of film edits
A film can fundamentally be broken into numerous shots placed one after another. These shots are divided by cuts. Film cuts can be broadly divided...
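As a rough illustration of the task, a classic non-learned baseline detects hard cuts by comparing color histograms of consecutive frames with OpenCV. The sketch below is an illustrative assumption, not the project's actual model; the function name and threshold are placeholders.

```python
# Hypothetical baseline: flag a hard cut wherever the color histograms of
# two consecutive frames stop correlating. Learned detectors do far better.
import cv2

def detect_hard_cuts(video_path, threshold=0.5):
    """Return frame indices where a hard cut likely occurs."""
    cap = cv2.VideoCapture(video_path)
    cuts, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Correlation near 1.0 means similar frames; a sharp drop suggests a cut.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                cuts.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts
```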
Contributor
Nitesh Mahawar
Mentor
Anna Wilson, Francis Steen, Frankie Robertson
Organization
Red Hen Lab
Multimodal TV Show Segmentation
I will continue from last year's work, improving the clustering algorithm in the in-production code and enhancing the previous results. The main problem...
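For illustration, one simple way to turn clustering into temporal segmentation is to constrain an agglomerative clusterer so that only temporally adjacent shots may merge; segment boundaries then fall where the cluster label changes. The sketch below assumes per-shot feature vectors are available upstream and is not the in-production algorithm.

```python
# Illustrative sketch: segment a broadcast by clustering per-shot features
# under a temporal-chain connectivity constraint (scikit-learn assumed).
import numpy as np
from scipy.sparse import diags
from sklearn.cluster import AgglomerativeClustering

def segment_by_clustering(shot_features, n_segments):
    """shot_features: (num_shots, dim) array; returns boundary shot indices."""
    n = len(shot_features)
    # Each shot may only merge with its temporal neighbors, so every cluster
    # is a contiguous run of shots, i.e., a segment.
    connectivity = diags([np.ones(n - 1), np.ones(n - 1)], offsets=[-1, 1])
    labels = AgglomerativeClustering(
        n_clusters=n_segments, connectivity=connectivity
    ).fit_predict(shot_features)
    return [i for i in range(1, n) if labels[i] != labels[i - 1]]

# Toy check: three blocks of 20 synthetic shots should split at 20 and 40.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(m, 0.1, (20, 16)) for m in (0.0, 1.0, 2.0)])
print(segment_by_clustering(feats, n_segments=3))
```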
Contributor
Hannes
Mentor
Ahmed Ismail, Karan Singla, Peter Uhrig
Organization
Red Hen Lab
Utilizing Speech-to-Speech Translation to Facilitate a Multilingual Text, Audio, and Video Message Board and Database
We design a simple pipeline that uses state-of-the-art speech-to-text, text-to-text, and text-to-speech models to create a speech-to-speech translation...
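A minimal sketch of the cascade described above; the three model calls are passed in as hypothetical placeholders, so any ASR, machine translation, and TTS system could be slotted in.

```python
# Cascade sketch: speech-to-text -> text-to-text -> text-to-speech.
# The asr, translate, and tts callables are stand-ins for real models.
def speech_to_speech(audio, src_lang, tgt_lang, asr, translate, tts):
    transcript = asr(audio, lang=src_lang)                           # speech-to-text
    translated = translate(transcript, src=src_lang, tgt=tgt_lang)   # text-to-text
    return tts(translated, lang=tgt_lang)                            # text-to-speech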
Contributor
Amr Maghraby
Mentor
Vera Tobin, Ahmed Ismail, Mark Williams, John Bell
Organization
Red Hen Lab
Machine detection of film edits
With the spread of digital video, the number of people who need to edit and manipulate video content grows every day. This requires us to have...
Contributor
Yunfei Zhao
Mentor
Anna Wilson, Mahnaz Parian, cpc, Javier Valenzuela, Daniel Alcaraz, Inés Olza
Organization
Red Hen Lab
Gesture temporal detection pipeline for news videos
Gesture recognition has become popular in recent years, since it can play an essential role in non-verbal communication and emotion analysis, as well as...
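As an illustration of the temporal-detection step, suppose some per-frame gesture classifier (not shown here) yields a probability for each frame; gesture intervals can then be recovered by thresholding and merging consecutive positive frames. The function and parameters below are hypothetical.

```python
# Sketch under assumptions: turn per-frame gesture probabilities into
# (start, end) time spans by thresholding and merging consecutive frames.
def frames_to_intervals(probs, fps, threshold=0.5, min_len=3):
    """probs: per-frame gesture probabilities; returns (start_s, end_s) spans."""
    intervals, start = [], None
    for i, p in enumerate(probs):
        if p >= threshold and start is None:
            start = i                       # a candidate gesture begins
        elif p < threshold and start is not None:
            if i - start >= min_len:        # drop runs too short to be gestures
                intervals.append((start / fps, i / fps))
            start = None
    if start is not None and len(probs) - start >= min_len:
        intervals.append((start / fps, len(probs) / fps))
    return intervals

print(frames_to_intervals([0.1, 0.9, 0.8, 0.9, 0.2, 0.7], fps=25))  # [(0.04, 0.16)]
```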
Contributor
ankiitgupta7
Mentor
Maria Hedblom, cpc, Francis Steen, Mark Turner, Javier Valenzuela
Organization
Red Hen Lab
Simulating Representational Communication in Vervet Monkeys
Vervet monkeys (Cercopithecus aethiops) are said to give acoustically different alarm calls to different predators, evoking contrasting, seemingly...
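One standard way to formalize such call systems computationally is a Lewis signaling game with simple reinforcement; the toy below is an illustrative assumption, not the contributor's simulation.

```python
# Assumed formalization: a Lewis signaling game in which a sender reinforces
# calls that lead the receiver to identify the right predator, so distinct
# call-predator conventions can emerge.
import random

PREDATORS, CALLS = ["eagle", "leopard", "snake"], [0, 1, 2]
weights = {p: {c: 1.0 for c in CALLS} for p in PREDATORS}

def emit(pred):
    """Sender samples a call with probability proportional to its weight."""
    calls, w = zip(*weights[pred].items())
    return random.choices(calls, weights=w)[0]

def interpret(call):
    """Receiver guesses the predator most associated with the call (random ties)."""
    best = max(weights[p][call] for p in PREDATORS)
    return random.choice([p for p in PREDATORS if weights[p][call] == best])

for _ in range(5000):
    pred = random.choice(PREDATORS)
    call = emit(pred)
    if interpret(call) == pred:        # communication succeeded:
        weights[pred][call] += 1.0     # reinforce that call for that predator

print({p: max(weights[p], key=weights[p].get) for p in PREDATORS})
```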
Contributor
Yash Khasbage
Mentor
Daniel Alcaraz, Mark Turner, Karan Singla
Organization
Red Hen Lab
Anonymizing Audiovisual Data
The audiovisual data held by Red Hen is not shareable. The visual data clearly shows the speakers involved, and a person can be recognized with...
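As a sketch of one anonymization step, faces can be detected and blurred with OpenCV's bundled Haar cascade; voice anonymization would be handled separately. This is an illustrative baseline, not the project's pipeline.

```python
# Minimal face-blurring sketch with OpenCV's stock frontal-face cascade.
import cv2

def blur_faces(frame):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        # Replace each detected face region with a heavy Gaussian blur.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0
        )
    return frame
```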
Contributor
Nickil Maveli
Mentor
Suzie Xi, Mark Turner, Javier Valenzuela, Anna Wilson, Maria Hedblom, Francis Steen, Frederico Belcavello, Inés Olza, Tiago Torrent
Organization
Red Hen Lab
Detecting Joint Meaning Construal by Language and Gesture
The project aims to develop a prototype that is capable of meaning construction using multimodal channels of communication. Specifically, for a...
Contributor
Lisardo Pérez Lugones
Mentor
Juan Jose Batalla Rosado, Jungseock Joo, Stephanie Wood
Organization
Red Hen Lab
Depicting of Graphical Communication Systems (GCS) in Aztec/Central Mexican manuscripts with Deep Learning: glyphic visual recognition and deciphering using Keras
Looking across human writing and communication systems, and inspired by the Google Arts & Culture project Fabricius, we propose the creation of a framework to...
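For a sense of what such a Keras recognizer might start from, a small convolutional network over cropped glyph images is one conventional baseline; the architecture, input size, and class count below are assumptions, not the proposed framework.

```python
# Minimal sketch, assuming glyphs are cropped to fixed-size grayscale images
# sorted into per-class folders; a small Keras CNN as a starting point.
import tensorflow as tf

def build_glyph_classifier(num_classes, size=64):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(size, size, 1)),
        tf.keras.layers.Rescaling(1.0 / 255),          # scale pixels to [0, 1]
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_glyph_classifier(num_classes=50)         # 50 classes is a placeholder
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```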
Contributor
Mohamed Mokhtar
Mentor
Vera Tobin, VaibhavGupta, Peter Uhrig, Gulshan Kumar, Anna Wilson
Organization
Red Hen Lab
Red Hen Rapid Annotator
Continuing the work done on Rapid Annotator 2.0 by Vaibhav Gupta, there are improvements and new functionalities that can be added to the current...
Contributor
Swadesh Jana
Mentor
Elizabeth Mahoney, Peter Uhrig
Organization
Red Hen Lab
Create a Red Hen OpenDataset for gestures with performance baselines
Red Hen OpenDataset contains annotated gesture videos from talk shows. The project requires systematizing the data for computer science researchers...
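By way of illustration, a conventional first performance baseline for an annotated gesture dataset is a majority-class classifier; the features and gesture labels below are stand-ins for the real annotations.

```python
# Hedged sketch: report a majority-class baseline with scikit-learn.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import train_test_split

X = np.random.rand(200, 32)                                # placeholder features
y = np.random.choice(["beat", "deictic", "iconic"], 200)   # placeholder labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
print("majority-class accuracy:", baseline.score(X_te, y_te))
```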
Contributor
Tarun Nagdeve
Mentor
Jungseock Joo, Stephanie Wood
Organization
Red Hen Lab
Development of a Visual Recognition model for Aztec Hieroglyphs
This project aims to develop a visual recognition model for Aztec hieroglyphs. The Aztec script is pictographic and ideographic proto-writing. It has...