Red Hen Lab: Research on Multimodal Communication
Red Hen Lab is a distributed consortium of researchers in multimodal communication, with participants from all over the world. Our members include senior professors at major research universities and senior developers in technology corporations, as well as junior professors, postdoctoral researchers, graduate students, undergraduate students, and even a few advanced high school students. Red Hen develops code for Natural Language Processing, audio parsing, computer vision, and joint multimodal analysis. Last summer our focus was audio parsing; the focus for 2016 is multimodal machine learning. Red Hen Lab is directed jointly by Francis Steen (UCLA) and Mark Turner (Case Western Reserve University).
Red Hen Lab 2016 Projects
Xi-Jin Zhang (mfs6174): Computer Vision and Machine Learning Applications on Artwork Images
The proposal is inspired by idea G on Red Hen's GSoC 2016 ideas page. The main purpose is to develop models and code helping domain experts to...
Abhinav Mehta: Gesture Recognition Using Machine Learning
Gesture recognition using template matching, motion history images, and machine learning. The project is divided into three phases involving...
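As a rough illustration of the motion-history-image idea mentioned above: each pixel that moved in the latest frame is stamped with a maximum "timestamp" value, and pixels that did not move decay toward zero, so recent motion appears bright and older motion fades. This is a minimal NumPy sketch under assumed parameters (`tau`, `thresh` are illustrative), not the project's actual implementation.

```python
import numpy as np

def update_mhi(mhi, prev_frame, curr_frame, tau=10, thresh=30):
    """Update a motion history image from two consecutive grayscale frames.

    Pixels where the frame difference exceeds `thresh` are set to `tau`;
    all other pixels decay by 1 per frame (floored at 0).
    """
    motion = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > thresh
    return np.where(motion, tau, np.maximum(mhi - 1, 0))

# Toy example: a single bright pixel appears, then the scene goes still.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 1] = 255                      # motion at one pixel
mhi = np.zeros((4, 4), dtype=np.float32)
mhi = update_mhi(mhi, prev, curr)     # moving pixel stamped with tau
mhi = update_mhi(mhi, curr, curr)     # no motion: history decays by 1
```

The resulting `mhi` can then be fed to a classifier as a compact summary of where and how recently motion occurred.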
mozin: Gesture Recognition Using Multimodal Deep Learning
Use video and text data of the speaker to recognise the speaker's gestures on TV using LSTMs.
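One common way to combine video and text with LSTMs, as this project proposes, is late fusion: run one recurrent network over per-frame visual features and another over token embeddings, then classify the concatenated final hidden states. The sketch below uses a hand-rolled NumPy LSTM cell with random weights purely to show the data flow; all dimensions, the fusion strategy, and the five-class output are assumptions, not the project's actual design.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; gate pre-activations stacked as [i, f, o, g]."""
    n = h.size
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c = f * c + i * g          # update cell state
    h = o * np.tanh(c)         # emit hidden state
    return h, c

def run_lstm(seq, n, rng):
    """Run an LSTM with random weights over a (time, features) sequence."""
    d = seq.shape[1]
    W = rng.standard_normal((4 * n, d)) * 0.1
    U = rng.standard_normal((4 * n, n)) * 0.1
    b = np.zeros(4 * n)
    h, c = np.zeros(n), np.zeros(n)
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
    return h                   # final hidden state summarizes the sequence

rng = np.random.default_rng(0)
video = rng.standard_normal((20, 16))   # 20 frames of 16-dim visual features
text = rng.standard_normal((8, 32))     # 8 tokens of 32-dim word embeddings
fused = np.concatenate([run_lstm(video, 12, rng), run_lstm(text, 12, rng)])
logits = rng.standard_normal((5, fused.size)) @ fused  # 5 gesture classes
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

In practice the weights would be trained jointly (e.g. by backpropagation through both streams), but the shape of the computation is the same.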
Soumitra Agarwal: Gestures, Machine Learning and Other Things
The proposal aims to identify elements of co-speech gestures in a massive dataset of television news. The steps will include building a flawed data-set,...
Aswin kumar J: Constructing Bootstrapped Human Motion Data for Gesture Analysis
The project aims at detecting human gestures with the help of classifiers. The project consists of: 1) a database holding the segmented frames of...