Research on Multimodal Communication

Red Hen Lab is a distributed consortium of researchers in multimodal communication, with participants all over the world. We are senior professors at major research universities, senior developers in technology corporations, junior professors, postdoctoral researchers, graduate students, undergraduate students, and even a few advanced high school students. Red Hen develops code in natural language processing, audio parsing, computer vision, and joint multimodal analysis. For GSoC 2015, our focus was audio parsing. For GSoC 2016, our focus was multimodal machine learning. For GSoC 2017, we invite student proposals for components of a unified multimodal processing pipeline, whose aim is to extract information from text, audio, and video, and to develop integrative cross-modal feature detection tasks. Red Hen Lab is directed jointly by Francis Steen (UCLA) and Mark Turner (Case Western Reserve University).

Technologies

  • high performance computing
  • machine learning
  • OpenCV
  • audio processing
  • multimodal analysis

Topics

  • Science and Medicine
  • natural language processing
  • co-speech gesture
  • big data visualization
  • deep learning
  • multimedia
Red Hen Lab 2017 Projects