Embedding learning, a.k.a. representation learning, has been shown to model large-scale semantic knowledge graphs effectively. A key concept is a mapping of the knowledge graph to a tensor representation whose entries are predicted by models using latent representations of generalized entities. Latent variable models are well suited to deal with the high dimensionality and sparsity of typical knowledge graphs. In recent publications, embedding models have been extended to also consider temporal evolutions, temporal patterns, and subsymbolic representations. In this paper we map embedding models to various cognitive memory functions and postulate several hypotheses. A first hypothesis that arises out of this work is that mutual information exchange can be achieved by a sharing or coupling of distributed latent representations of entities across different memory functions (unique-representation hypothesis). Second, the sequential sampling hypothesis states that retrieval and question answering are achieved by sequentially sampling latent representations of entities. Third, the functional memory hypothesis states that memory operations are implemented as functions on the latent representations. A fourth hypothesis is that a latent representation for time t, which captures all events happening at time t, is a bridge between sensory input and episodic memory (temporal-representation hypothesis). A fifth hypothesis is that the decoding of sensory input is constrained to lead to a semantic explanation of the sensory data (emerging semantics hypothesis). Sixth, the attention hypothesis states that only sensory information that is novel is stored in episodic memory. Finally, the semantic-attractor learning hypothesis proposes a basis for learning in cognitive memory systems.
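To make the tensor view concrete, here is a minimal sketch (not the authors' implementation) of a bilinear embedding model in the RESCAL style: each entity e gets a latent vector, each relation p gets a latent matrix, and the predicted value of a knowledge-graph tensor entry for a triple (subject, predicate, object) is a bilinear form over the latent representations. The dimensions, random initialization, and toy triples below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, rank = 5, 2, 3

# Latent representations: one vector per entity, one matrix per relation.
A = rng.normal(size=(n_entities, rank))
R = rng.normal(size=(n_relations, rank, rank))

def score(s: int, p: int, o: int) -> float:
    """Predicted value of the tensor entry for triple (s, p, o):
    a_s^T R_p a_o."""
    return float(A[s] @ R[p] @ A[o])

# A sparse knowledge graph stores only observed triples; the latent
# model generalizes to predictions for all possible triples.
observed = [(0, 0, 1), (1, 0, 2), (3, 1, 4)]
for s, p, o in observed:
    print(f"score({s}, {p}, {o}) = {score(s, p, o):.3f}")
```

In such models the same entity representation A[s] is reused across all relations and queries, which is the kind of shared latent representation the unique-representation hypothesis appeals to.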
from cs.AI updates on arXiv.org http://ift.tt/1NdF4iH