Red Hen Lab
Research on Multimodal Communication
Red Hen Lab is an international distributed cooperative of researchers in multimodal communication. Our members include senior professors at major research universities and senior developers in technology corporations, as well as junior professors, postdoctoral researchers, graduate students, undergraduate students, and even a few advanced high school students. Red Hen develops code for natural language processing, automatic speech recognition (ASR), audio parsing, gesture analysis, media analysis, computer vision, and multimodal analysis.
Red Hen's multimodal communication research involves locating, identifying, and characterizing auditory and visual elements in videos and pictures; examples include gestures, eye movements, and tone of voice. We may provide annotated clips or images and present the challenge of developing machine learning tools that find additional instances in a much larger dataset. We favor projects that combine more than one modality and that have a clear communicative function, such as floor-holding techniques. Once a feature has been successfully identified across our full dataset of several hundred thousand hours of news video, cognitive linguists, communication scholars, and political scientists can use this information to study higher-level phenomena in language, culture, and politics, and to develop a better understanding of the full spectrum of human communication. Our dataset is recorded in a large number of languages, giving Red Hen a global perspective.
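The workflow described above, training a detector on a small set of annotated examples and then scanning a far larger archive for new instances, can be sketched roughly as follows. This is a minimal illustration rather than Red Hen's actual tooling: the clip names and the extract_features() helper are hypothetical placeholders for real feature extraction (pose keypoints, prosodic statistics, frame embeddings, and so on).

```python
# Minimal sketch of the annotate-then-scale workflow: train on hand-labeled
# clips, then scan a much larger unlabeled archive. All paths and the
# extract_features() helper are hypothetical placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(clip):
    # Placeholder standing in for real per-clip features
    # (pose keypoints, prosody, frame embeddings, ...).
    rng = np.random.default_rng(abs(hash(clip)) % (2**32))
    return rng.normal(size=64)

# 1. Train a detector on the small set of hand-annotated clips
#    (1 = feature present, 0 = feature absent).
annotated = {"clip_0001.mp4": 1, "clip_0002.mp4": 0,
             "clip_0003.mp4": 1, "clip_0004.mp4": 0}
X = np.stack([extract_features(c) for c in annotated])
y = np.array(list(annotated.values()))
detector = LogisticRegression().fit(X, y)

# 2. Scan the larger archive and keep confident hits for human review.
archive = [f"clip_{i:06d}.mp4" for i in range(5, 1000)]
for clip in archive:
    p = detector.predict_proba(extract_features(clip).reshape(1, -1))[0, 1]
    if p > 0.9:
        print(f"{clip}: candidate instance (p={p:.2f})")
```

In practice the confident candidates would be fed back to human annotators, so the labeled set and the detector improve together over successive passes.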
For GSoC 2021, we invite student proposals for components of a unified multimodal processing pipeline that extracts information from text, audio, and video, and for integrative cross-modal feature detection tasks. Red Hen Lab is directed jointly by Francis Steen (UCLA) and Mark Turner (Case Western Reserve University).
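To make the notion of a pipeline "component" concrete, here is one possible shape for such a stage, sketched under the assumption that stages communicate through time-stamped annotations attached to a shared document. The class names and the Annotation fields are illustrative assumptions, not an existing Red Hen interface.

```python
# Sketch of a pluggable pipeline stage: each component consumes a media
# file plus the annotations produced by earlier stages and contributes
# annotations of its own. Names and fields are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Annotation:
    start: float       # seconds from the start of the recording
    end: float
    modality: str      # "text" | "audio" | "video"
    label: str
    confidence: float

@dataclass
class Document:
    media_path: str
    annotations: list = field(default_factory=list)

class Stage:
    """One pipeline component; subclasses implement process()."""
    def process(self, doc: Document) -> Document:
        raise NotImplementedError

class GestureStage(Stage):
    def process(self, doc):
        # A real implementation would run pose estimation here.
        doc.annotations.append(
            Annotation(12.3, 14.1, "video", "beat_gesture", 0.87))
        return doc

def run_pipeline(stages, doc):
    for stage in stages:
        doc = stage.process(doc)
    return doc

doc = run_pipeline([GestureStage()], Document("newscast.mp4"))
print(doc.annotations)
```

Because every stage reads and writes the same annotation structure, a text, audio, or video component can be developed independently, and cross-modal detectors can be built by combining the annotations that earlier single-modality stages have produced.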