Red Hen Lab: Research on Multimodal Communication

Red Hen Lab is a distributed consortium of researchers in multimodal communication, with participants from all over the world. We are senior professors at major research universities, senior developers in technology corporations, junior professors, postdoctoral researchers, graduate students, undergraduate students, and even a few advanced high school students. Red Hen develops code in natural language processing, audio parsing, computer vision, and joint multimodal analysis. Last summer our focus was audio parsing; the focus for 2016 is multimodal machine learning. Red Hen Lab is directed jointly by Francis Steen (UCLA) and Mark Turner (Case Western Reserve University).

View ideas list


  • high performance computing
  • machine learning
  • opencv
  • audio processing
  • multimodal analysis


  • Science and Medicine
  • natural language processing
  • co-speech gesture
  • big data visualization
  • deep learning
  • multimedia

Red Hen Lab 2016 Projects

  • Xi-Jin Zhang (mfs6174)
    Computer Vision and Machine Learning Applications on Artwork Images
    The proposal is inspired by the idea G on RedHen's GSoC 2016 idea page. The main purpose is to develop models and code helping domain experts to...
  • Abhinav Mehta
    Gesture Recognition Using Machine Learning
    Gesture Recognition using template matching, motion history image and machine learning. The project is basically divided into 3 phases involving...
  • mozin
    Gesture recognition using multimodal deep learning
    Use video and text data to recognise the gestures of speakers on TV using LSTMs.
  • Soumitra Agarwal
    Gestures, Machine learning and other things
    The proposal aims to identify elements of co-speech gestures in a massive data set of television news. The steps will include building a flawed data-set,...
  • Aswin kumar J
    To construct Bootstrapping Human Motion Data for Gesture Analysis
    The project aims at detecting human gestures with the help of classifiers. The project consists of 1) a database holding the segmented frames of...
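Several of the gesture projects above build on frame-differencing techniques such as the motion history image (MHI), in which recent motion appears bright and older motion fades over time. The sketch below is an illustrative NumPy-only implementation, not code from any of these projects; the `tau` (decay duration) and `threshold` values are arbitrary assumptions chosen for the toy example.

```python
import numpy as np

def update_mhi(mhi, prev_frame, frame, tau=10.0, threshold=15):
    """Update a motion history image (MHI).

    Pixels whose frame-to-frame difference exceeds `threshold` are
    set to `tau`; all other pixels decay by 1 per frame, so recent
    motion is bright and older motion gradually fades to zero.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    motion = diff > threshold
    return np.where(motion, tau, np.maximum(mhi - 1.0, 0.0))

# Tiny synthetic example: a bright square moves one pixel to the right.
f0 = np.zeros((8, 8), dtype=np.uint8)
f1 = np.zeros((8, 8), dtype=np.uint8)
f0[2:4, 2:4] = 200
f1[2:4, 3:5] = 200
mhi = update_mhi(np.zeros((8, 8)), f0, f1)
# Columns the square vacated or entered register as motion (value 10);
# overlapping and static pixels stay at 0.
```

A gesture classifier would then be trained on features extracted from a stack of such MHIs, for example with template matching against known gesture silhouettes.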