Thursday, September 15, 2016

Learning from networked examples. (arXiv:1405.2600v2 [cs.AI] UPDATED)

Many machine learning algorithms assume that training examples are drawn independently and identically distributed (i.i.d.). This assumption no longer holds when learning from a networked sample, where two or more training examples may share common objects and hence share the features of those objects. We first show that the classic approach of ignoring this dependence can have a disastrous effect on the accuracy of statistics, and then consider alternatives. One alternative is to use only independent examples and discard everything else, but this is clearly suboptimal. We analyze sample error bounds in this networked setting, providing both improved and new results. We then propose an efficient weighting method that achieves a better sample error bound than those of previous methods. Our approach is based on novel concentration inequalities for networked variables.
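The abstract does not spell out the weighting scheme, but a natural instantiation of weighting networked examples is a fractional hypergraph matching: each example i gets a weight w_i in [0, 1] so that, for every shared object, the weights of the examples touching it sum to at most 1, with the total weight maximized. The Python sketch below illustrates that idea under this assumption; the toy network, variable names, and data are hypothetical, not taken from the paper.

    # A minimal sketch, assuming the weighting is a fractional hypergraph
    # matching (the paper's exact method may differ).  Each example lists
    # the shared objects it uses; overlapping examples are dependent.
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical network: 4 examples forming a cycle of shared objects.
    examples = [
        {"a", "b"},   # example 0 uses objects a and b
        {"b", "c"},   # example 1 overlaps example 0 on b
        {"c", "d"},
        {"d", "a"},
    ]
    objects = sorted(set().union(*examples))

    # One constraint per object: weights of examples touching it sum to <= 1.
    A_ub = np.array([[1.0 if obj in ex else 0.0 for ex in examples]
                     for obj in objects])
    b_ub = np.ones(len(objects))

    # linprog minimizes, so negate the objective to maximize total weight.
    res = linprog(c=-np.ones(len(examples)), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 1.0)] * len(examples), method="highs")
    weights = res.x
    print(dict(zip(range(len(examples)), np.round(weights, 3))))
    # Here every weight comes out 0.5: no object is "used" more than once in
    # total, and the weighted sample behaves, for concentration purposes,
    # roughly like an i.i.d. sample of effective size sum(weights) = 2.

Compare this with the two extremes the abstract mentions: ignoring the network amounts to giving every example weight 1 (overcounting shared objects), while keeping only independent examples picks a 0/1 matching (here at most 2 examples). The fractional weights sit in between and never do worse than the 0/1 choice.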



from cs.AI updates on arXiv.org http://ift.tt/1ggs7qe
via IFTTT
