Development of Deep Learning Optimization Algorithms in TMVA
- Mentors: Vladimir Ilievski, Stefan Wunsch, Sergei Gleyzer
- Organization: CERN-HSF
The existing deep learning module in TMVA has so far used plain gradient descent to update the network parameters and minimize the cost function. More advanced optimization methods can speed up learning and often reach a better final value of the cost function; a good optimizer can make the difference between waiting days and only a few hours for a good result.
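To illustrate the difference, a plain gradient-descent step updates each parameter directly from its gradient, while a momentum-based step accumulates a velocity across iterations. The following C++ sketch is only a conceptual comparison under the assumption of a flat `std::vector<double>` parameter layout; the function names and signatures are illustrative and not the actual TMVA interfaces.

```cpp
#include <vector>

// Plain gradient descent: theta <- theta - lr * grad
void SGDStep(std::vector<double>& theta, const std::vector<double>& grad, double lr)
{
   for (std::size_t i = 0; i < theta.size(); ++i)
      theta[i] -= lr * grad[i];
}

// Momentum: v <- mu * v + lr * grad; theta <- theta - v
// The velocity vector carries information across iterations, damping oscillations.
void MomentumStep(std::vector<double>& theta, std::vector<double>& velocity,
                  const std::vector<double>& grad, double lr, double mu)
{
   for (std::size_t i = 0; i < theta.size(); ++i) {
      velocity[i] = mu * velocity[i] + lr * grad[i];
      theta[i]   -= velocity[i];
   }
}
```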
The project aims to implement various optimization modules used in machine learning (momentum-based SGD, Nesterov accelerated momentum, Adagrad, RMSProp, Adadelta, Adamax, Adam, Nadam, AMSGrad, etc.).
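As one concrete example of the adaptive methods listed above, Adam keeps exponentially decaying averages of the gradient and of its element-wise square, with a bias correction for the early iterations. This is a minimal sketch of the standard Adam update rule; the variable names, the flat-vector state representation, and the default hyperparameters are assumptions for illustration, not the eventual TMVA API.

```cpp
#include <cmath>
#include <vector>

// Illustrative Adam update for iteration t (1-based); m and v are the running
// first- and second-moment estimates, both initialised to zero before training.
void AdamStep(std::vector<double>& theta, std::vector<double>& m, std::vector<double>& v,
              const std::vector<double>& grad, long t,
              double lr = 0.001, double beta1 = 0.9, double beta2 = 0.999, double eps = 1e-8)
{
   for (std::size_t i = 0; i < theta.size(); ++i) {
      m[i] = beta1 * m[i] + (1.0 - beta1) * grad[i];             // first moment
      v[i] = beta2 * v[i] + (1.0 - beta2) * grad[i] * grad[i];   // second moment
      const double mHat = m[i] / (1.0 - std::pow(beta1, t));     // bias correction
      const double vHat = v[i] / (1.0 - std::pow(beta2, t));
      theta[i] -= lr * mHat / (std::sqrt(vHat) + eps);           // parameter update
   }
}
```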