There is intense interest in applying machine learning to problems of causal inference in healthcare, economics, education, and other fields. In particular, individual-level causal inference has applications such as precision medicine and personalized advertising. We give a new theoretical analysis and family of algorithms for estimating individual treatment effect (ITE) from observational data. The algorithms learn a "balanced" representation such that the induced treated and control distributions look similar. We give a novel, simple and intuitive generalization-error bound showing that the expected ITE estimation error of a representation is bounded by a sum of the standard generalization error of that representation and the distance between the treated and control distributions induced by the representation. We use Integral Probability Metrics to measure distances between distributions, deriving explicit bounds for the Wasserstein and Maximum Mean Discrepancy (MMD) distances. Experiments on real and simulated data show the new algorithms match or outperform state-of-the-art methods.
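To make the balancing idea concrete: schematically, the bound has the form ITE error ≲ factual prediction error on the treated + factual prediction error on the controls + α · IPM(p_Φ^{treated}, p_Φ^{control}), where Φ is the learned representation, IPM is an Integral Probability Metric such as the Wasserstein or MMD distance, and α is a constant (this is only a paraphrase of the stated result, not its exact statement). Below is a minimal, hypothetical sketch of the kind of MMD penalty such an algorithm could add to its factual loss to encourage balance; the function names, kernel choice, and trade-off weight are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: an RBF-kernel MMD penalty between treated and
# control representations, the kind of term used to encourage "balance".
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of a and b."""
    sq_dists = (np.sum(a**2, axis=1)[:, None]
                + np.sum(b**2, axis=1)[None, :]
                - 2.0 * a @ b.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(phi_treated, phi_control, sigma=1.0):
    """Biased estimate of the squared MMD between the treated and
    control representation distributions."""
    k_tt = rbf_kernel(phi_treated, phi_treated, sigma)
    k_cc = rbf_kernel(phi_control, phi_control, sigma)
    k_tc = rbf_kernel(phi_treated, phi_control, sigma)
    return k_tt.mean() + k_cc.mean() - 2.0 * k_tc.mean()

# Toy usage: phi(x) stands in for the learned representation of each unit.
# A training objective would add alpha * mmd2(...) to the factual
# prediction loss (alpha is a hypothetical trade-off weight).
rng = np.random.default_rng(0)
phi_t = rng.normal(loc=0.5, size=(64, 10))   # representations of treated units
phi_c = rng.normal(loc=0.0, size=(128, 10))  # representations of control units
print("MMD^2 penalty:", mmd2(phi_t, phi_c))
```

Driving this penalty toward zero makes the treated and control groups indistinguishable in representation space, which is exactly the quantity the second term of the bound controls.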
from cs.AI updates on arXiv.org http://ift.tt/1UwrExD
via IFTTT