Several 'edge-discovery' applications over graph-based data models are known to have worst-case quadratic complexity, even when the discovered edges are sparse. One example is the generic link discovery problem between two graphs, which has attracted research interest in several communities. Specific versions of this problem include link prediction in social networks, ontology alignment between metadata-rich RDF data, approximate joins, and entity resolution between instance-rich data. As large datasets continue to proliferate, reducing this quadratic complexity to make the task practical is an important research problem. Within the entity resolution community, the problem is commonly referred to as blocking. A particular class of learnable blocking schemes, known as Disjunctive Normal Form (DNF) blocking schemes, has emerged as the state of the art for homogeneous (i.e., same-schema) tabular data. Despite the promise of these schemes, no formalism or learning framework has been developed for them when the input data instances are generic attributed graphs exhibiting both node and edge heterogeneity. Such a development would extend the complexity-reducing scope of DNF schemes to a variety of problems, including entity resolution and type alignment between heterogeneous RDF graphs, and link prediction in networks represented as attributed graphs. This paper presents a graph-theoretic formalism for DNF schemes and investigates their learnability within an optimization framework. Experimentally, DNF schemes learned on pairs of heterogeneous RDF graphs are shown to achieve high complexity reductions (98.25% across ten RDF test cases) at little cost to coverage and with high reliability (<2.5% standard deviation). Finally, one extant class of RDF blocking schemes is shown to be a special case of DNF schemes.
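To make the idea of a DNF blocking scheme concrete, here is a minimal sketch in Python, under simplifying assumptions: records are flat dictionaries of string attributes (not the attributed graphs the paper formalizes), and the predicate names, attributes, and helper functions (exact_value, first_3_chars, candidate_pairs, etc.) are illustrative inventions, not the paper's actual formalism or API. The sketch only shows the core mechanism: a scheme is a disjunction of conjunctions of blocking predicates, and two records become a candidate pair if they agree on every predicate in at least one conjunct, which avoids the full quadratic pairwise comparison.

```python
# Minimal sketch of DNF blocking (assumed, simplified setting: flat string records).
from collections import defaultdict

# --- blocking predicates: (record, attribute) -> set of blocking key values ---
def exact_value(record, attr):
    """Key on the exact attribute value."""
    v = record.get(attr)
    return {v} if v else set()

def first_3_chars(record, attr):
    """Key on the first three characters of the attribute value."""
    v = record.get(attr)
    return {v[:3].lower()} if v else set()

# A DNF blocking scheme: a disjunction (outer list) of conjunctions
# (inner lists) of (predicate, attribute) pairs. Attributes are made up.
dnf_scheme = [
    [(exact_value, "zip")],                               # conjunct 1
    [(first_3_chars, "surname"), (exact_value, "city")],  # conjunct 2
]

def conjunct_keys(record, conjunct):
    """Combine key values across all predicates in one conjunct."""
    keys = [""]
    for pred, attr in conjunct:
        values = pred(record, attr)
        if not values:
            return set()  # record produces no key for this conjunct
        keys = [k + "|" + v for k in keys for v in values]
    return set(keys)

def candidate_pairs(records_a, records_b, scheme):
    """(a, b) is a candidate iff the records share a blocking key
    under at least one conjunct of the scheme."""
    pairs = set()
    for ci, conjunct in enumerate(scheme):
        index = defaultdict(lambda: (set(), set()))
        for rid, rec in records_a.items():
            for k in conjunct_keys(rec, conjunct):
                index[(ci, k)][0].add(rid)
        for rid, rec in records_b.items():
            for k in conjunct_keys(rec, conjunct):
                index[(ci, k)][1].add(rid)
        for left, right in index.values():
            pairs.update((a, b) for a in left for b in right)
    return pairs
```

Learning such a scheme amounts to choosing which conjuncts to include so that most true matches still share a block (coverage) while the number of candidate pairs stays far below quadratic (complexity reduction); the paper casts this trade-off as an optimization problem over attributed graphs rather than tables.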
from cs.AI updates on arXiv.org http://ift.tt/24oLh5k
via IFTTT