The free lunch is over: with ever-increasing amounts of data to be analyzed and processed by learning algorithms, we need methods that harness the processing power available in the growing number of threads in modern CPUs. Optimizers are integral to modern machine learning algorithms, forming the backbone of the training infrastructure. Recent research in optimization has produced several interesting algorithms for scaling sequential optimizers across multiple threads. This project proposal covers the implementation of two primary approaches to parallelizing stochastic optimization algorithms in mlpack.
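The abstract above does not name the two approaches, so the snippet below is purely illustrative: a minimal sketch of lock-free parallel SGD in the spirit of HOGWILD! (Niu et al., 2011), one commonly cited approach to multi-threaded stochastic optimization. It uses plain C++ with OpenMP on a synthetic least-squares problem and does not reflect mlpack's actual optimizer API.

```cpp
// Illustrative sketch only: lock-free parallel SGD ("HOGWILD!"-style)
// on a synthetic linear least-squares problem. This is not mlpack code.
#include <cstddef>
#include <iostream>
#include <random>
#include <vector>

int main()
{
  const size_t n = 10000;      // Number of data points.
  const size_t d = 10;         // Dimensionality.
  const double stepSize = 0.01;
  const size_t epochs = 20;

  // Generate a synthetic regression problem: y = x . wTrue + noise.
  std::mt19937 rng(42);
  std::normal_distribution<double> gauss(0.0, 1.0);
  std::vector<double> wTrue(d), w(d, 0.0);
  for (auto& v : wTrue)
    v = gauss(rng);

  std::vector<std::vector<double>> X(n, std::vector<double>(d));
  std::vector<double> y(n);
  for (size_t i = 0; i < n; ++i)
  {
    double dot = 0.0;
    for (size_t j = 0; j < d; ++j)
    {
      X[i][j] = gauss(rng);
      dot += X[i][j] * wTrue[j];
    }
    y[i] = dot + 0.01 * gauss(rng);
  }

  // HOGWILD!-style loop: threads update the shared weights w without any
  // locks. The technique targets sparse problems, where unsynchronized
  // updates rarely touch the same coordinates; this dense toy problem
  // just illustrates the mechanics.
  for (size_t e = 0; e < epochs; ++e)
  {
    #pragma omp parallel for
    for (long i = 0; i < (long) n; ++i)
    {
      // Prediction and residual for a single sample.
      double pred = 0.0;
      for (size_t j = 0; j < d; ++j)
        pred += X[i][j] * w[j];
      const double residual = pred - y[i];

      // Unsynchronized SGD step on the shared parameter vector.
      for (size_t j = 0; j < d; ++j)
        w[j] -= stepSize * residual * X[i][j];
    }
  }

  // Report the squared distance to the generating weights.
  double err = 0.0;
  for (size_t j = 0; j < d; ++j)
    err += (w[j] - wTrue[j]) * (w[j] - wTrue[j]);
  std::cout << "Squared error to true weights: " << err << "\n";
}
```

Compile with `g++ -fopenmp`. Because the updates are deliberately unsynchronized, this scheme avoids locking overhead entirely; the original analysis shows near-linear speedups when gradients are sparse, which is the regime the technique is designed for.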

Organization

mlpack

Student

Shikhar Bhardwaj

Mentors

  • Yannis Mentekidis
