We extend graph-based identification methods for linear models by allowing background knowledge in the form of externally evaluated parameters. Such information could be obtained, for example, from a previously conducted randomized experiment, from substantive understanding of the domain, or even from another identification technique. To incorporate such information systematically, we propose the addition of auxiliary variables to the model, which are constructed so that certain paths will be conveniently cancelled. This cancellation allows the auxiliary variables to help conventional methods of identification (e.g., single-door criterion, instrumental variables, half-trek criterion) and model testing (e.g., d-separation, over-identification). Moreover, by iteratively alternating steps of identification and adding auxiliary variables, we can improve the power of existing identification and model testing methods, even without additional knowledge. We operationalize this general approach for instrumental sets (a generalization of instrumental variables) and show that the resulting procedure subsumes the most general identification method for linear systems known to date. We further discuss the application of this new operation in the tasks of model testing and z-identification.
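The path-cancellation idea behind auxiliary variables can be illustrated on a toy linear model (my own hypothetical example, not taken from the paper): suppose Z → X → Y with an additional direct edge Z → Y whose coefficient c is known externally. Subtracting the known contribution, Y* = Y − c·Z, cancels the direct path, after which Z behaves as a valid instrument for the X → Y coefficient.

```python
import numpy as np

# Toy linear SEM (hypothetical, for illustration only):
#   Z = e_z;  X = a*Z + e_x;  Y = b*X + c*Z + e_y
rng = np.random.default_rng(0)
n = 200_000
a, b, c = 1.5, 0.7, 2.0

Z = rng.standard_normal(n)
X = a * Z + rng.standard_normal(n)
Y = b * X + c * Z + rng.standard_normal(n)

# Naive IV estimate of b using Z is biased by the direct path Z -> Y:
# cov(Z, Y)/cov(Z, X) = b + c/a, not b.
naive = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]

# Auxiliary variable: subtract the externally known coefficient c,
# cancelling the Z -> Y path. Z is now a valid instrument for X -> Y.
Y_aux = Y - c * Z
b_hat = np.cov(Z, Y_aux)[0, 1] / np.cov(Z, X)[0, 1]

print(naive, b_hat)  # naive is biased; b_hat recovers b = 0.7
```

This sketch covers only the simplest case (one known edge coefficient, one instrument); the paper's contribution is doing this systematically and iteratively within general graphical identification criteria.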
from cs.AI updates on arXiv.org http://ift.tt/1OD10FS