
Monday, July 18, 2016

Piecewise convexity of artificial neural networks. (arXiv:1607.04917v1 [cs.LG])

Although artificial neural networks have shown great promise in applications ranging from computer vision to speech recognition, optimizing their parameters remains difficult in both practice and theory. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions is still poorly understood. In this work we offer some theoretical guarantees for networks with continuous piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, the network is piecewise convex as a function of the input data. Second, the network, considered as a function of the parameters in a single layer with all others held constant, is again piecewise convex. Third, the network, as a function of all of its parameters, is piecewise multi-convex, a generalization of biconvexity. Accordingly, we show that any point to which gradient descent converges is a local minimum of some piece, so gradient descent can converge to a non-minimum only at the boundary between pieces. These results might offer some insight into the effectiveness of gradient descent methods in optimizing this class of networks.
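Since the activation functions in question (ReLU being the standard example) are continuous and piecewise affine, a small concrete example may help make the piecewise structure tangible. The following is a minimal NumPy sketch, not code from the paper: the one-hidden-layer network, the toy dimensions, and the random weights are all illustrative assumptions. It checks that on a region of input space where the ReLU activation pattern is fixed, the network coincides with a single affine (hence convex) map of the input.

```python
# Minimal sketch (illustrative, not from the paper): a one-hidden-layer ReLU
# network is piecewise affine in its input. On any region where the hidden
# activation pattern is constant, it reduces to one affine map x -> a @ x + c.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions and randomly drawn parameters.
W1 = rng.normal(size=(4, 3))   # first-layer weights
b1 = rng.normal(size=4)        # first-layer biases
w2 = rng.normal(size=4)        # second-layer weights
b2 = rng.normal()              # second-layer bias

def relu(z):
    return np.maximum(z, 0.0)

def network(x):
    """Scalar output of the two-layer ReLU network."""
    return w2 @ relu(W1 @ x + b1) + b2

def activation_pattern(x):
    """Which hidden units are active; constant on each piece."""
    return W1 @ x + b1 > 0

# Pick a point and a nearby point inside the same piece.
x0 = rng.normal(size=3)
pattern = activation_pattern(x0)

# On that piece the network equals the affine function x -> a @ x + c,
# where inactive hidden units are masked out by a diagonal 0/1 matrix.
D = np.diag(pattern.astype(float))
a = w2 @ D @ W1
c = w2 @ D @ b1 + b2

x_near = x0 + 1e-6 * rng.normal(size=3)
assert np.all(activation_pattern(x_near) == pattern)
assert np.isclose(network(x_near), a @ x_near + c)
print("network agrees with its affine piece near x0")
```

The same bookkeeping extends to the parameter view taken in the abstract: holding the activation pattern and all other layers fixed, the output is affine in one layer's weights, which is what makes the layer-wise (and multi-convex) structure plausible.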



from cs.AI updates on arXiv.org http://ift.tt/2aoPPjd
via IFTTT
