Tuesday, May 31, 2016

Controlling Exploration Improves Training for Deep Neural Networks. (arXiv:1605.09593v1 [cs.LG])

Stochastic optimization methods are widely used to train deep neural networks, yet achieving effective training with them remains a challenging research problem, because it is difficult to find good parameters on a loss function that has many saddle points. In this paper, we propose a stochastic optimization method called STDProp for effective training of deep neural networks. Its key idea is to effectively explore parameters on the complex surface of a loss function. We additionally develop a momentum version of STDProp. While our approaches are easy to implement and memory-efficient, they are more effective than other practical stochastic optimization methods for deep neural networks.
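The abstract does not give STDProp's update rule, so the sketch below is not the paper's method; it is only a generic illustration of the kind of momentum-based stochastic update the abstract alludes to (plain SGD with momentum on a toy quadratic loss). All names here (`momentum_sgd_step`, the learning rate and momentum values) are illustrative assumptions.

```python
import numpy as np

def momentum_sgd_step(w, v, grad, lr=0.01, beta=0.9):
    """One generic SGD-with-momentum update (illustrative, not STDProp).

    v accumulates an exponentially weighted sum of past gradients,
    which helps the iterate keep moving through flat regions and
    saddle points instead of stalling.
    """
    v = beta * v + grad
    w = w - lr * v
    return w, v

# Toy example: minimize f(w) = 0.5 * ||w||^2, whose gradient is w.
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(200):
    grad = w  # in real training this would be a minibatch gradient
    w, v = momentum_sgd_step(w, v, grad)

print(np.linalg.norm(w))  # should be close to the minimum at 0
```

In practice the gradient would come from a minibatch of training data, and methods in this family differ mainly in how they scale or adapt the step per parameter.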



from cs.AI updates on arXiv.org http://ift.tt/1XdJSLb
via IFTTT