Variational Autoencoders on GPUs for particle physics applications
- Mentors
  - Saurav Shekhar, Lorenzo Moneta, Kim Albertsson, Omar Zapata
- Organization
  - CERN-HSF
Deep Learning relies heavily on a large number of linear algebra operations, which makes it naturally amenable to data parallelism. GPUs are very fast at computing these linear operations, so a significant speedup can be achieved by delegating this computation to the GPU. This project aims to provide a framework for training Variational Autoencoders. The plan is to extend the current Deep Auto Encoder module into a generative framework that supports Convolutional and possibly Recurrent Encoder-Decoder architectures. The resulting framework will be compatible with GPUs.
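What distinguishes a Variational Autoencoder from the existing deterministic autoencoder is its probabilistic latent layer: the encoder outputs the mean and log-variance of a diagonal Gaussian, a latent vector is drawn via the reparameterization trick, and a KL-divergence term against the standard normal prior is added to the reconstruction loss. The sketch below is a minimal, framework-independent illustration of those two pieces in standalone C++; the function names `sampleLatent` and `klDivergence` are hypothetical and not part of any existing module's API.

```cpp
// Minimal sketch of the VAE reparameterization trick and Gaussian KL term.
// Assumption: a diagonal Gaussian encoder with outputs mu and logVar;
// the names below are illustrative, not an existing framework API.
#include <cmath>
#include <random>
#include <vector>

// Draw z = mu + sigma * eps with eps ~ N(0, I), so gradients can flow
// through mu and logVar (the reparameterization trick).
std::vector<double> sampleLatent(const std::vector<double>& mu,
                                 const std::vector<double>& logVar,
                                 std::mt19937& rng)
{
    std::normal_distribution<double> stdNormal(0.0, 1.0);
    std::vector<double> z(mu.size());
    for (std::size_t i = 0; i < mu.size(); ++i) {
        double sigma = std::exp(0.5 * logVar[i]);
        z[i] = mu[i] + sigma * stdNormal(rng);
    }
    return z;
}

// Analytic KL divergence between N(mu, diag(sigma^2)) and the standard
// normal prior N(0, I); added to the reconstruction loss to form the ELBO.
double klDivergence(const std::vector<double>& mu,
                    const std::vector<double>& logVar)
{
    double kl = 0.0;
    for (std::size_t i = 0; i < mu.size(); ++i) {
        kl += 0.5 * (std::exp(logVar[i]) + mu[i] * mu[i] - 1.0 - logVar[i]);
    }
    return kl;
}

int main()
{
    std::mt19937 rng(42);
    // Toy encoder output for a 3-dimensional latent space.
    std::vector<double> mu     = {0.1, -0.3, 0.5};
    std::vector<double> logVar = {-1.0, -0.5, 0.0};

    std::vector<double> z  = sampleLatent(mu, logVar, rng);
    double kl              = klDivergence(mu, logVar);
    (void)z;
    (void)kl;
    return 0;
}
```

Both operations are element-wise over the latent dimensions and batch samples, which is exactly the kind of data-parallel work that maps well onto a GPU backend.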