Research on Multimodal Communication

Red Hen Lab is an international distributed cooperative of researchers in multimodal communication. Our members range from senior professors at major research universities and senior developers in technology corporations to junior professors, postdoctoral researchers, graduate students, undergraduate students, and even a few advanced high school students. Red Hen develops code in natural language processing, automatic speech recognition (ASR), audio parsing, gesture analysis, media analysis, computer vision, and multimodal analysis.

Red Hen's multimodal communication research involves locating, identifying, and characterizing auditory and visual elements in videos and pictures; examples include gestures, eye movements, and tone of voice. We may provide annotated clips or images and pose the challenge of developing machine learning tools that find additional instances in a much larger dataset. We favor projects that combine more than one modality and have a clear communicative function, such as floor-holding techniques. Once a feature has been successfully identified in our full dataset of several hundred thousand hours of news videos, cognitive linguists, communication scholars, and political scientists can use this information to study higher-level phenomena in language, culture, and politics, and to develop a better understanding of the full spectrum of human communication. Our dataset is recorded in a large number of languages, giving Red Hen a global perspective.
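
To make this workflow concrete, here is a minimal sketch in Python, assuming a detector trained on the annotated clips we provide; the detect_feature scorer is a hypothetical placeholder, and OpenCV is used only as one convenient way to read frames:

    # Scan a video, score sampled frames with a detector, and emit
    # time-stamped candidate segments for human review.
    import cv2  # pip install opencv-python

    def detect_feature(frame) -> float:
        """Hypothetical placeholder: confidence in [0, 1] that the frame
        shows the target feature, e.g. a particular gesture."""
        raise NotImplementedError("train a model on the annotated clips")

    def candidate_timestamps(video_path, threshold=0.8, sample_fps=2.0):
        cap = cv2.VideoCapture(video_path)
        native_fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
        step = max(1, round(native_fps / sample_fps))  # ~2 frames/sec
        hits, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % step == 0 and detect_feature(frame) >= threshold:
                hits.append(idx / native_fps)  # seconds from start
            idx += 1
        cap.release()
        return hits

Timestamps flagged this way would then be reviewed by annotators, closing the loop between machine detection and human analysis.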

For GSoC 2021, we invite proposals from students for components for a unified multimodal processing pipeline, whose aim is to extract information from text, audio, and video, and to develop integrative cross-modal feature detection tasks. Red Hen Lab is directed jointly by Francis Steen (UCLA) and Mark Turner (Case Western Reserve University).
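
As one illustration of what a component contract for such a pipeline might look like (a sketch under assumed names, not Red Hen's actual interfaces), each stage could consume a media file plus the annotations produced so far and return new time-stamped annotations, letting text, audio, and video detectors be chained and their outputs cross-referenced:

    # Illustrative pipeline skeleton; Annotation and PipelineStage are
    # assumed names, not an existing Red Hen API.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Annotation:
        start: float       # seconds from start of file
        end: float
        modality: str      # "text", "audio", or "video"
        label: str         # e.g. "gesture/beat"
        confidence: float

    class PipelineStage:
        def run(self, media_path: str,
                prior: List[Annotation]) -> List[Annotation]:
            raise NotImplementedError

    def run_pipeline(media_path: str,
                     stages: List[PipelineStage]) -> List[Annotation]:
        annotations: List[Annotation] = []
        for stage in stages:
            # Each stage can see earlier results, enabling cross-modal
            # detectors that combine, say, speech emphasis with gesture.
            annotations.extend(stage.run(media_path, annotations))
        return annotations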

Technologies

  • python
  • machine learning
  • nlp
  • computer vision
  • big data science

Red Hen Lab 2021 Projects

  • Yash Khasbage
    Anonymizing Audiovisual Data
    The audiovisual data held by Red Hen is not shareable: the visual data clearly shows the speakers involved, and a person can be recognized with...
  • Swadesh Jana
    Create a Red Hen OpenDataset for gestures with performance baselines
    The Red Hen OpenDataset contains annotated gesture videos from talk shows. The project requires systematizing the data for computer science researchers...
  • Lisardo Pérez Lugones
    Depicting of Graphical Communication Systems (GCS) in Aztec/Central Mexican manuscripts with Deep Learning: glyphic visual recognition and deciphering using Keras
    Among all human writing systems, and inspired by the Google Arts & Culture project Fabricius, we propose the creation of a framework to...
  • Nickil Maveli
    Detecting Joint Meaning Construal by Language and Gesture
    The project aims to develop a prototype that is capable of meaning construction using multi-modal channels of communication. Specifically, for a...
  • Tarun Nagdeve
    Development of a Visual Recognition model for Aztec Hieroglyphs
    This project aims to develop a visual recognition model for Aztec hieroglyphs. The Aztec script is pictographic and ideographic picture-writing. It has...
  • Yunfei Zhao
    Gesture temporal detection pipeline for news videos
    Gesture recognition has become popular in recent years, since it can play an essential role in non-verbal communication and emotion analysis, as well as...
  • Shreyan Ganguly
    Machine detection of film edits
    A film can fundamentally be broken into numerous shots placed one after another; these shots are separated by cuts (a baseline cut detector is sketched after this list). Film cuts can be broadly divided...
  • Amr Maghraby
    Machine detection of film edits
    With the spread of digital video, the number of people who need to edit and manipulate video content grows every day. This requires us to have...
  • Nitesh Mahawar
    Multimodal TV Show Segmentation
    I will continue from last year's work, bringing the improved clustering algorithm into the in-production code and enhancing the previous work. The main problem...
  • Mohamed Mokhtar
    Red Hen Rapid Annotator
    Continuing the work done on Rapid Annotator 2.0 by Vaibhav Gupta, there are improvements and new functionalities that can be added to the current...
  • ankiitgupta7
    Simulating Representational Communication in Vervet Monkeys
    Vervet monkeys (Cercopithecus aethiops) are said to give acoustically different alarm calls to different predators, evoking contrasting, seemingly...
  • Hannes
    Utilizing Speech-to-Speech Translation to Facilitate a Multilingual Text, Audio, and Video Message Board and Database
    We design a simple pipeline that uses state-of-the-art speech-to-text, text-to-text, and text-to-speech models to create a speech-to-speech translation...
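
For the two "Machine detection of film edits" projects above, one common baseline (not necessarily the approach either student takes) is to compare color histograms of consecutive frames and flag a cut where the difference spikes:

    # Histogram-difference cut detection: a standard baseline, sketched
    # here with an assumed threshold value.
    import cv2  # pip install opencv-python

    def find_cuts(video_path, threshold=0.5):
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
        prev_hist, cuts, idx = None, [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [50, 60],
                                [0, 180, 0, 256])
            cv2.normalize(hist, hist)
            if prev_hist is not None:
                # Bhattacharyya distance: ~0 for similar frames,
                # near 1 across a hard cut.
                d = cv2.compareHist(prev_hist, hist,
                                    cv2.HISTCMP_BHATTACHARYYA)
                if d > threshold:
                    cuts.append(idx / fps)  # seconds from start
            prev_hist = hist
            idx += 1
        cap.release()
        return cuts

Gradual transitions such as fades and dissolves defeat this simple baseline, which is part of what makes the problem interesting.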