Wednesday, April 13, 2016

Learning with Memory Embeddings. (arXiv:1511.07972v6 [cs.AI] UPDATED)

Embedding learning, also known as representation learning, has been shown to model large-scale semantic knowledge graphs effectively. A key concept is a mapping of the knowledge graph to a tensor representation whose entries are predicted by models using latent representations of generalized entities. Latent variable models are well suited to deal with the high dimensionality and sparsity of typical knowledge graphs. In recent publications, embedding models have been extended to also consider temporal evolution, temporal patterns, and subsymbolic representations. In this paper we map embedding models, which were developed purely as solutions to technical problems in modelling temporal knowledge graphs, to various cognitive memory functions, in particular semantic and concept memory, episodic memory, sensory memory, short-term memory, and working memory. We discuss learning, query answering, the path from sensory input to semantic decoding, and the relationship between episodic and semantic memory. We introduce a number of hypotheses on human memory that can be derived from the developed mathematical models. There are three main hypotheses. The first is that semantic memory is described by triples and episodic memory by triples in time. The second is that generalized entities have unique latent representations which are shared across memory functions and which form the basis for prediction, decision support, and other functionalities executed by working memory. The third is that the latent representation for a time $t$, which summarizes all sensory information available at time $t$, is the basis for episodic memory. The proposed model supports both the recall of previous memories and the mental imagery of future events and sensory impressions.
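The mapping from a knowledge graph to a tensor is easiest to see in code. Below is a minimal sketch in the spirit of bilinear tensor-factorization models such as RESCAL, one plausible instance of the model family the abstract alludes to, not the paper's exact formulation; the entities, relation, rank, and random initialization are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: scoring knowledge-graph triples with latent embeddings,
# in the spirit of bilinear tensor-factorization models such as RESCAL.
# Entity and relation names here are made up for illustration.

rng = np.random.default_rng(0)

entities = ["Paris", "France", "Berlin", "Germany"]
relations = ["capitalOf"]

rank = 4  # dimensionality of the latent space

# One latent vector per generalized entity, one matrix per relation.
A = {e: rng.normal(scale=0.1, size=rank) for e in entities}
R = {r: rng.normal(scale=0.1, size=(rank, rank)) for r in relations}

def score(s, p, o):
    """Predicted (unnormalized) truth value of the triple (s, p, o):
    a_s^T R_p a_o, i.e. one entry of the reconstructed adjacency tensor."""
    return A[s] @ R[p] @ A[o]

# Training would fit A and R so observed triples score high, e.g. by
# gradient descent on a logistic loss; here we only show the forward pass.
print(score("Paris", "capitalOf", "France"))
```

Following the abstract's first and third hypotheses, episodic memory would extend this scheme with a time index: the triple (s, p, o) becomes a quadruple (s, p, o, t), scored with an additional latent representation for the time $t$.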



from cs.AI updates on arXiv.org http://ift.tt/1NdF4iH
via IFTTT