Red Hen Lab

Research on Multimodal Communication

Technologies
machine learning, OpenCV, high-performance computing, multimodal analysis, audio processing
Topics
natural language processing, deep learning, multimedia, co-speech gesture, big data visualization

Red Hen Lab is a distributed consortium of researchers in multimodal communication, with participants all over the world. We are senior professors at major research universities, senior developers in technology corporations, junior professors, postdoctoral researchers, graduate students, undergraduate students, and even a few advanced high school students. Red Hen develops code for natural language processing, audio parsing, computer vision, and joint multimodal analysis. For GSoC 2015, our focus was audio parsing; for GSoC 2016, multimodal machine learning. For GSoC 2017, we invite student proposals for components of a unified multimodal processing pipeline, whose aim is to extract information from text, audio, and video, and to develop integrative cross-modal feature detection tasks. Red Hen Lab is directed jointly by Francis Steen (UCLA) and Mark Turner (Case Western Reserve University).

2017 Program

Successful Projects

Contributor
Prannoy Mupparaju
Mentor
Peter Uhrig, Mark Turner, Francis Steen
Organization
Red Hen Lab
Multilingual Corpus Pipeline
This project aims to build a pipeline for a searchable corpus in multiple languages. We will be using NewsScape data for the project and tools like...
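
The summary above is truncated, but any multilingual corpus pipeline needs a language-routing step before indexing. Below is a minimal Python sketch of that step; the langdetect package and the whitespace tokenizer are illustrative stand-ins, not the project's actual tool choices.

# Minimal sketch: route raw text to per-language tokenizers before
# corpus indexing. langdetect and the whitespace tokenizer are
# illustrative assumptions, not the project's actual tools.
from langdetect import detect  # pip install langdetect

def tokenize(text, lang):
    # Placeholder: a real pipeline would plug in a language-specific
    # tokenizer or tagger keyed on `lang` here.
    return text.split()

def process_document(text):
    lang = detect(text)               # e.g. 'en', 'de', 'es'
    return {"lang": lang, "tokens": tokenize(text, lang)}

print(process_document("Las noticias de hoy llegan desde Madrid."))
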
Contributor
Nayeem Aquib
Mentor
Philipp Heinrich, Peter Uhrig, Francis Steen, Kai Chan
Organization
Red Hen Lab
Sentiment Analysis of Social Media Data
While crime in general has been falling for decades, hate crime has gone in the other direction. Especially after the 2016 US election, it has risen...
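
As a rough illustration of the task, the sketch below scores short posts with NLTK's VADER analyzer. VADER is only a stand-in baseline here; the summary does not name the project's actual models.

# Baseline sketch: score posts with NLTK's VADER sentiment analyzer.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

posts = [
    "What a wonderful, welcoming community event!",
    "This group should not be allowed in our town.",
]
for post in posts:
    scores = sia.polarity_scores(post)  # neg/neu/pos plus compound in [-1, 1]
    print(f"{scores['compound']:+.2f}  {post}")
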
Contributor
donghun lee
Mentor
Tim Groeling, Francis Steen, Kai Chan
Organization
Red Hen Lab
Multimodal television show segmentation
I aim to build a general system that detects the natural boundaries of TV shows. This task has long been handled manually by skilled...
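
One simple automatic cue for such boundaries is an abrupt change in frame color statistics. The sketch below uses OpenCV (one of Red Hen's listed technologies) to flag candidate cuts by comparing HSV histograms of consecutive frames; the 0.5 threshold and histogram settings are illustrative assumptions, and a real segmenter would fuse many more cues.

import cv2

def candidate_boundaries(path, threshold=0.5):
    # Flag frames whose HSV color histogram correlates poorly with the
    # previous frame's histogram, i.e. abrupt visual changes.
    cap = cv2.VideoCapture(path)
    prev_hist, frame_idx, hits = None, 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                hits.append(frame_idx)
        prev_hist, frame_idx = hist, frame_idx + 1
    cap.release()
    return hits
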
Contributor
skrish13
Mentor
Mehul Bhatt, Francis Steen, Cristóbal Pagán Cánovas, Jakob Suchan
Organization
Red Hen Lab
Multimodal Emotion Detection on Videos using CNN-RNN, 3D Convolutions and Audio features
This is a deep learning approach that uses both the image and audio modalities of the videos to detect and characterize emotion. It uses a...
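
A minimal PyTorch sketch of the CNN-RNN fusion the summary names: a small CNN encodes each frame, an LSTM models the frame sequence, and precomputed audio features are concatenated before classification. All layer sizes and the seven-class output are illustrative assumptions, not the project's actual architecture.

import torch
import torch.nn as nn

class EmotionNet(nn.Module):
    def __init__(self, n_audio_feats=40, n_classes=7):
        super().__init__()
        self.cnn = nn.Sequential(                  # per-frame encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64 + n_audio_feats, n_classes)

    def forward(self, frames, audio):              # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.rnn(feats)                # final hidden state
        return self.head(torch.cat([h[-1], audio], dim=1))

logits = EmotionNet()(torch.randn(2, 8, 3, 64, 64), torch.randn(2, 40))
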
Contributor
ahaldar
Mentor
Shruti, Francis Steen
Organization
Red Hen Lab
Neural Network Models to Study Framing and Echo Chambers in News
An interesting study is to construct a model of media representations of the world, considering features of social discourse such as crime,...
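
One way to make such a study concrete is to train separate word embeddings per outlet and compare how a term's nearest neighbors shift between them. The gensim sketch below does exactly that on toy corpora that are purely illustrative.

from gensim.models import Word2Vec

corpora = {
    "outlet_a": [["immigration", "reform", "opportunity", "economy"]] * 50,
    "outlet_b": [["immigration", "crisis", "border", "crime"]] * 50,
}
for outlet, sentences in corpora.items():
    model = Word2Vec(sentences, vector_size=32, min_count=1, seed=0)
    print(outlet, model.wv.most_similar("immigration", topn=3))
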
Contributor
Divesh Pandey
Mentor
Tomas Gonzalez Sanchez, Cristóbal Pagán Cánovas, Carlos Fernandez
Organization
Red Hen Lab
Audio Visual Speech Recognition System based on Deep Speech
Red Hen Lab's current audio pipeline can be extended to support speech recognition. This project proposes the development of a deep neural-net speech...
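
A Deep Speech-style acoustic model is, at its core, a recurrent network over spectrogram frames trained with CTC loss. The PyTorch sketch below shows that audio-only core; the layer sizes, the 29-symbol character alphabet, and the dummy inputs are illustrative assumptions, and the visual stream the project adds is omitted.

import torch
import torch.nn as nn

class SpeechNet(nn.Module):
    def __init__(self, n_mels=80, n_symbols=29):   # 26 letters + space + ' + blank
        super().__init__()
        self.rnn = nn.GRU(n_mels, 128, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, n_symbols)

    def forward(self, spec):                       # spec: (B, T, n_mels)
        out, _ = self.rnn(spec)
        return self.fc(out).log_softmax(-1)

model, ctc = SpeechNet(), nn.CTCLoss(blank=28)
log_probs = model(torch.randn(2, 100, 80)).transpose(0, 1)  # CTC wants (T, B, C)
targets = torch.randint(0, 28, (2, 20))            # dummy character labels
loss = ctc(log_probs, targets,
           torch.full((2,), 100, dtype=torch.long),
           torch.full((2,), 20, dtype=torch.long))
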
Contributor
Ganesh Srinivas
Mentor
Vera Tobin, Otto Santa Ana, mpac
Organization
Red Hen Lab
Learning Embeddings for Laughter Categorization
I propose to train a deep neural network to discriminate between various kinds of laughter (giggle, snicker, etc.). A convolutional neural network can...
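
The usual first step for such a network is converting each laughter clip into a log-mel spectrogram, which then serves as the CNN's input image. A librosa sketch, with a hypothetical file name and illustrative parameters:

import librosa
import numpy as np

y, sr = librosa.load("laugh_clip.wav", sr=16000)   # hypothetical clip
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)     # shape (64, frames): CNN input
print(log_mel.shape)
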
Contributor
littleowen
Mentor
Mark Turner, Jacek Wozny
Organization
Red Hen Lab
Large-scale Speaker Recognition System for CNN News
This project aims to build a large-scale speaker recognition system for tagging speakers in CNN news recordings on top of the existing Red Hen audio...
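
A sketch of the enrollment-and-scoring logic such a system needs. Here embed is a hypothetical placeholder for a trained speaker-embedding network; cosine similarity against per-speaker mean embeddings then tags the closest known speaker.

import numpy as np

def embed(utterance):
    # Placeholder for a trained speaker-embedding model (d-vector style).
    rng = np.random.default_rng(abs(hash(utterance)) % (2**32))
    return rng.standard_normal(128)

def enroll(utterances):
    # Average embeddings of a speaker's known utterances.
    return np.stack([embed(u) for u in utterances]).mean(axis=0)

def identify(utterance, enrolled):
    # enrolled: {speaker_name: mean_embedding}
    v = embed(utterance)
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(enrolled, key=lambda name: cos(v, enrolled[name]))
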
Contributor
Karolina Stosio
Mentor
Karan Singla
Organization
Red Hen Lab
Audio embedding space in a MultiTask architecture
Auditory stimuli like music, radio recordings, movie soundtracks, or regular speech are widely used in research. While it is easy for a human to...
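
A minimal PyTorch sketch of one multi-task architecture over a shared audio embedding: a single encoder feeds several task heads, and the summed task losses force the shared embedding to serve all tasks. Head names and sizes are illustrative assumptions.

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
heads = nn.ModuleDict({
    "genre": nn.Linear(32, 10),     # e.g. music genre
    "speech": nn.Linear(32, 2),     # speech vs. non-speech
})

x = torch.randn(8, 64)              # batch of frame-level audio features
z = encoder(x)                      # shared embedding space
loss = sum(
    nn.functional.cross_entropy(head(z), torch.randint(0, head.out_features, (8,)))
    for head in heads.values()
)
loss.backward()                     # gradients flow into the shared encoder
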