Thursday, September 1, 2016

A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction. (arXiv:1512.06293v2 [cs.IT] UPDATED)

Deep convolutional neural networks have led to breakthrough results in numerous practical machine learning tasks, such as classifying images in the ImageNet data set, learning control policies for Atari games and the board game Go, and image captioning. Many of these applications first perform feature extraction and then feed the results into a trainable classifier.

The mathematical analysis of deep convolutional neural networks for feature extraction was initiated by Mallat (2012). Specifically, Mallat considered so-called scattering networks based on a wavelet transform followed by the modulus non-linearity in each network layer, and proved translation invariance (asymptotically in the wavelet scale parameter) and deformation stability of the corresponding feature extractor.

This paper complements Mallat's results by developing a theory of deep convolutional neural networks for feature extraction that encompasses general convolutional transforms, or in more technical parlance, general semi-discrete frames (including Weyl-Heisenberg, curvelet, shearlet, ridgelet, and wavelet frames), general Lipschitz-continuous non-linearities (e.g., rectified linear units, shifted logistic sigmoids, hyperbolic tangents, and modulus functions), and general Lipschitz-continuous pooling operators emulating sub-sampling and averaging. Moreover, all of these elements can differ from layer to layer. For the resulting feature extractor, we prove a translation invariance result that is of vertical nature in the sense that the network depth determines the amount of invariance, and we establish deformation sensitivity bounds for signal classes with inherent deformation insensitivity, such as band-limited functions.
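To make the layered construction concrete, here is a minimal sketch in Python/NumPy of a feature extractor in this spirit: each layer convolves the input with a filter bank (standing in for a semi-discrete frame), applies a pointwise Lipschitz non-linearity, and pools by sub-sampling, with all three ingredients chosen independently per layer. The Gabor-like atoms, frequencies, and strides below are hypothetical illustrative choices, not taken from the paper.

    import numpy as np

    def gabor_atom(length, freq):
        # Hypothetical frame atom: a Gaussian window times a complex exponential.
        t = np.linspace(-1.0, 1.0, length)
        return np.exp(-4.0 * t**2) * np.exp(2j * np.pi * freq * t)

    def layer(signal, freqs, nonlinearity, pool_stride):
        # One network layer: filter bank -> pointwise non-linearity -> pooling.
        outputs = []
        for f in freqs:
            filtered = np.convolve(signal, gabor_atom(65, f), mode="same")
            pooled = nonlinearity(filtered)[::pool_stride]  # sub-sampling as pooling
            outputs.append(pooled)
        return outputs

    modulus = np.abs                           # Mallat's scattering non-linearity
    relu = lambda z: np.maximum(z.real, 0.0)   # rectified linear unit

    x = np.random.randn(1024)                  # toy one-dimensional input signal

    # Different frame / non-linearity / pooling choices in different layers:
    level1 = layer(x, freqs=[4, 8, 16], nonlinearity=modulus, pool_stride=2)
    level2 = [out for s in level1
              for out in layer(s, freqs=[4, 8], nonlinearity=relu, pool_stride=2)]

    # Averaging each propagated signal yields the extracted feature vector.
    features = np.array([s.mean() for s in level1 + level2])
    print(features.shape)   # (3 + 3*2,) = (9,)

Deeper layers contribute features that are invariant over larger translations, which is the "vertical" invariance the paper formalizes.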



from cs.AI updates on arXiv.org http://ift.tt/1Jq815O
via IFTTT
