Deep Learning relies heavily on a large number of linear operations, a property that makes it naturally amenable to data parallelism. GPUs are very fast at computing these linear operations, so significant speedups can be achieved by delegating this computation to the GPU. This project aims to provide a framework for training Variational Autoencoders. The plan is to extend the current Deep Auto Encoder module into a generative framework that supports Convolutional and possibly Recurrent Encoder-Decoder architectures. The framework will be compatible with GPUs.
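To make the goal concrete, the core of a Variational Autoencoder is an encoder that outputs the mean and log-variance of a latent Gaussian, a reparameterized sample from that Gaussian, a decoder, and an ELBO loss (reconstruction error plus a KL term). The sketch below is a minimal, hedged illustration of that computation using plain NumPy with linear layers; it is not the project's actual implementation (which targets the TMVA Deep Auto Encoder module), and all names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Linear encoder: maps the input to the mean and log-variance
    # of a diagonal Gaussian over the latent space.
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # which keeps the sampling step differentiable w.r.t. mu and logvar.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, W_dec):
    # Linear decoder mapping latent samples back to input space.
    return z @ W_dec

def elbo_loss(x, x_hat, mu, logvar):
    # Negative ELBO: squared-error reconstruction term plus the
    # closed-form KL divergence from N(mu, sigma^2) to N(0, I).
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    kl = -0.5 * np.mean(np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1))
    return recon + kl

# Toy dimensions (illustrative): 8 samples, 4 input features, 2 latent dims.
x = rng.standard_normal((8, 4))
W_mu = rng.standard_normal((4, 2)) * 0.1
W_logvar = rng.standard_normal((4, 2)) * 0.1
W_dec = rng.standard_normal((2, 4)) * 0.1

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
x_hat = decode(z, W_dec)
loss = elbo_loss(x, x_hat, mu, logvar)
print(loss)
```

In the actual framework these linear layers would be convolutional or recurrent encoder/decoder stacks, with the matrix products executed on the GPU.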

Organization

Student

Siddhartha Rao Kamalakara

Mentors

  • Saurav Shekhar
  • Lorenzo Moneta
  • Kim Albertsson
  • Omar Zapata

2018