Presenter:  Michael Jones
Presentation type:  Symposium
Presentation date/time:  7/28, 9:25–9:50
 
Bridging semantic representation and associative memory theory
 
Michael Jones, Indiana University
 
Contemporary models of lexical representation have a major advantage over traditional models in that they learn representations from statistical information in the environment rather than relying on hand-coded representations based on intuition. However, these methods are still fundamentally based on algorithms from document retrieval (e.g., Salton & McGill, 1983). In this talk, I will outline the BEAGLE model (Jones & Mewhort, 2007, Psychological Review), an attempt to build high-dimensional semantic representations for words using mechanisms adapted from associative memory theory (cf. Murdock, 1982). The model represents contextual co-occurrence and word order information in a single holographic vector per word, using superposition and convolution mechanisms that have proven effective at modeling human learning and memory in a variety of other domains. The additional word order information gives the model a higher-fidelity lexical representation than co-occurrence alone, which proves beneficial across several tasks. Further, the learning mechanism can be inverted to retrieve sequential dependency information from the semantic representations. The model will be trained on text corpora, and the similarity of the resulting representations will be compared to human data in tasks involving semantic judgments, priming, comprehension, and the time course of semantic acquisition.
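To make the encoding mechanism concrete, below is a minimal sketch of the holographic operations described above, assuming random Gaussian environmental vectors: superposition (vector addition) encodes contextual co-occurrence, circular convolution binds word order, and circular correlation approximately inverts the binding for retrieval. This illustrates the general holographic reduced representation technique rather than BEAGLE's full implementation (which binds all n-grams around a target word via a placeholder vector); the dimensionality, toy sentence, and function names here are illustrative assumptions.

import numpy as np

D = 1024  # dimensionality of the holographic vectors (illustrative choice)
rng = np.random.default_rng(0)

def env_vector():
    # Random Gaussian "environmental" vector with element variance 1/D,
    # so the expected vector length is roughly 1.
    return rng.normal(0.0, 1.0 / np.sqrt(D), D)

def cconv(a, b):
    # Circular convolution: binds two vectors into a single vector of the
    # same dimensionality (computed efficiently via FFT).
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    # Circular correlation: the approximate inverse of circular convolution,
    # used to decode one bound item given the other as a cue.
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

# Environmental vectors for a toy sentence: "dogs chase cats".
dogs, chase, cats = env_vector(), env_vector(), env_vector()

# Context information for "dogs": superposition of the other words.
context = chase + cats

# Order information: a single bigram binding ("chase cats"); simplified
# relative to the full model's placeholder-based n-gram bindings.
order = cconv(chase, cats)

# Memory vector for "dogs": superpose the context and order traces.
memory = context + order

# Inverting the binding: probe memory with "chase" to retrieve a noisy
# version of the word that followed it.
retrieved = ccorr(chase, memory)
sim = retrieved @ cats / (np.linalg.norm(retrieved) * np.linalg.norm(cats))
print(f"cosine similarity of retrieved item to 'cats': {sim:.2f}")  # well above chance (~0)

The retrieved vector is noisy but resembles "cats" far more than any unrelated word, which is the sense in which the learning mechanism can be inverted to recover sequential dependency information from the representation.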