
Wednesday, January 18, 2017

On the Performance of Network Parallel Training in Artificial Neural Networks. (arXiv:1701.05130v1 [cs.AI])

Artificial Neural Networks (ANNs) have received increasing attention in recent years, with applications spanning a wide range of disciplines including vital domains such as medicine, network security and autonomous transportation. However, neural network architectures are becoming increasingly complex, and with a growing need to obtain real-time results from such models, parallelization has become a pivotal mechanism for speeding up network training and deployment. In this work we propose an implementation of Network Parallel Training based on Cannon's Algorithm for matrix multiplication. We show that increasing the number of processes speeds up training until the point where process communication costs become prohibitive; this point varies with network complexity. We also show, through empirical efficiency calculations, that the speedup obtained is superlinear.
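The abstract's core building block is Cannon's Algorithm, which multiplies two matrices on a p × p process grid by aligning blocks and then repeatedly multiplying local blocks and shifting A left and B up. The paper's actual distributed implementation is not shown here; the following is only a minimal serial sketch of the block-shifting scheme (the function name `cannon_matmul` and the NumPy-based simulation are my own illustration, not the authors' code):

```python
import numpy as np

def cannon_matmul(A, B, p):
    """Serial simulation of Cannon's algorithm on a p x p grid of blocks.

    Each (i, j) cell below stands in for one process holding one block
    of A, B, and the accumulating result C.
    """
    n = A.shape[0]
    assert n % p == 0, "matrix size must be divisible by the grid size"
    b = n // p  # block size

    # Partition A and B into p x p grids of b x b blocks.
    Ablk = [[A[i*b:(i+1)*b, j*b:(j+1)*b].copy() for j in range(p)] for i in range(p)]
    Bblk = [[B[i*b:(i+1)*b, j*b:(j+1)*b].copy() for j in range(p)] for i in range(p)]
    Cblk = [[np.zeros((b, b)) for _ in range(p)] for _ in range(p)]

    # Initial alignment: shift row i of A left by i, column j of B up by j,
    # so process (i, j) starts with A[i][(i+j) % p] and B[(i+j) % p][j].
    Ablk = [[Ablk[i][(j + i) % p] for j in range(p)] for i in range(p)]
    Bblk = [[Bblk[(i + j) % p][j] for j in range(p)] for i in range(p)]

    # p compute-and-shift steps: multiply-accumulate the local blocks,
    # then shift A one block left and B one block up.
    for _ in range(p):
        for i in range(p):
            for j in range(p):
                Cblk[i][j] += Ablk[i][j] @ Bblk[i][j]
        Ablk = [[Ablk[i][(j + 1) % p] for j in range(p)] for i in range(p)]
        Bblk = [[Bblk[(i + 1) % p][j] for j in range(p)] for i in range(p)]

    return np.block(Cblk)
```

In a real MPI implementation, each shift would be a point-to-point send/receive along the process grid rather than a list re-indexing, which is where the communication cost that eventually dominates (as the abstract notes) comes from.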



from cs.AI updates on arXiv.org http://ift.tt/2iTjwiA
via IFTTT
