Wednesday, October 5, 2016

A Novel Representation of Neural Networks. (arXiv:1610.01549v1 [stat.ML])

Deep Neural Networks (DNNs) have become very popular for prediction in many areas. Their strength lies in representations with a large number of parameters, which are commonly learned via gradient descent or similar optimization methods. However, the representation is non-standardized, and gradients are often computed with component-based approaches that break the parameters down into scalar units instead of treating them as whole entities. This work addresses both problems: standard notation is used to represent DNNs in a compact framework, and gradients of DNN loss functions are calculated directly over the inner product space on which the parameters are defined. The framework is general and is applied to two common network types: the Multilayer Perceptron and the Deep Autoencoder.
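The abstract's central idea, computing gradients over whole parameter matrices rather than over individual scalar components, can be illustrated with a small sketch. The snippet below is not the paper's method; it is a minimal example, assuming a two-layer perceptron with a squared-error loss, showing how a modern autodiff library (JAX, used here purely for illustration and not mentioned in the abstract) returns gradients with the same matrix/vector structure as the parameters, i.e., as elements of the inner product space on which they are defined.

```python
import jax
import jax.numpy as jnp

# Hypothetical two-layer MLP. Parameters are kept as whole matrices and
# vectors, echoing the idea of treating parameters as single entities in
# an inner product space rather than as collections of scalars.
def mlp(params, x):
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 @ x + b1)
    return W2 @ h + b2

def loss(params, x, y):
    # Squared-error loss; the choice of loss is illustrative only.
    return jnp.sum((mlp(params, x) - y) ** 2)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = (
    jax.random.normal(k1, (4, 3)),  # W1: 4x3 weight matrix
    jnp.zeros(4),                   # b1
    jax.random.normal(k2, (2, 4)),  # W2: 2x4 weight matrix
    jnp.zeros(2),                   # b2
)
x = jnp.ones(3)
y = jnp.zeros(2)

# jax.grad returns gradients with the same structure as the parameters:
# the gradient for W1 is a 4x3 matrix, not 12 separate scalars.
grads = jax.grad(loss)(params, x, y)
print([g.shape for g in grads])  # [(4, 3), (4,), (2, 4), (2,)]
```

For a single linear layer this matches the familiar matrix-calculus result: with L(W) = ||Wx - y||^2 / 2 and the Frobenius inner product, the gradient is the whole matrix (Wx - y) x^T, computed in one step rather than entry by entry.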



from cs.AI updates on arXiv.org http://ift.tt/2dT5WuO
via IFTTT
