Presenter:  Chris Baker
Presentation type:  Symposium
Presentation date/time:  7/27  10:55-11:20
 
Action Understanding as Inverse Probabilistic Planning
 
Chris Baker, MIT
Joshua Tenenbaum, MIT
 
Human social interaction depends on the ability to understand other people's actions in terms of the mental states that produce behavior. Much like visual perception, action understanding proceeds unconsciously and effortlessly but is the result of sophisticated computations designed to solve a highly under-constrained inverse problem. While vision is a kind of "inverse graphics", action understanding is a kind of "inverse planning". The goal is to recover the goals and beliefs that lead an agent to act in some observed way. The core assumption is the principle of rationality: a rational agent tends to choose actions that satisfy its goals most efficiently given its model of the environment. Observing an agent's actions, we can then work backwards to infer its goals or its environment model (or perhaps both). Evidence from behavioral experiments suggests that action understanding in adults and even preverbal infants is qualitatively consistent with this "inverse planning" view. Our aim here is to develop a mathematically precise version of this account, to assess its quantitative predictive power for human judgments about the goals of actions, and if possible to distinguish it from simpler heuristic approaches. Our models take a Bayesian approach to inverse probabilistic planning in partially observable Markov decision problems (POMDPs). This inverse planning framework includes many specific models that differ in their representations of agents' mental states and actions -- the priors needed to solve inverse-planning problems. Our experiments were designed both to test the general framework and to probe the nature of people's representations of agents' goal structures. One set of experiments examined subjects' online and retrospective goal inferences from incomplete trajectories, while another set of experiments investigated subjects' predictions of agents' future actions, given observations of previous actions. We identify a class of inverse planning models that correlate very highly with people's judgments, and that crucially allow goals to have complex temporal dynamics and componential structure.
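
The inversion itself is ordinary Bayesian conditioning over candidate goals. A minimal sketch, assuming a hypothetical one-dimensional corridor and a Boltzmann-rational action likelihood (the setup, names, and parameter values are ours, not the authors'):

```python
import numpy as np

# Hypothetical setup: a corridor of states 0..4, candidate goals at the two
# ends, and an agent whose action choice follows the rationality principle
# via a softmax over goal-directed utilities.
goals = [0, 4]
beta = 2.0                          # degree of rationality

def action_likelihood(state, action, goal):
    """P(action | state, goal): softmax over how much each move (-1/+1)
    reduces distance to the goal."""
    moves = [-1, 1]
    utilities = [-abs(min(max(state + m, 0), 4) - goal) for m in moves]
    expu = np.exp(beta * np.array(utilities))
    return (expu / expu.sum())[moves.index(action)]

def posterior_over_goals(trajectory, prior=(0.5, 0.5)):
    """Bayes rule: P(goal | actions) is proportional to
    P(actions | goal) * P(goal)."""
    post = np.array(prior, dtype=float)
    for state, action in trajectory:
        post *= [action_likelihood(state, action, g) for g in goals]
    return post / post.sum()

# Two rightward moves are strong evidence for the goal at position 4.
print(posterior_over_goals([(1, 1), (2, 1)]))
```

The rationality principle enters only through the action likelihood; richer goal representations change the prior and likelihood, not the inversion itself.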



Presenter:  Jonathan Barzilai
Presentation type:  Talk
Presentation date/time:  7/27  9:00-9:25
 
The Applicability of Addition and Multiplication to Scale Values
 
Jonathan Barzilai, Dalhousie University
 
The conditions for applicability of the operations of addition and multiplication to scale values have not been identified in the literature, and they are not satisfied by the models of the classical theory of measurement, whether the underlying variables are physical or psychological (including the models of "conjoint measurement"). As a result, addition and multiplication, e.g., expressions of the form m(3)=m(1)+m(2) and m(2)=5m(1) for a given mass scale, are not applicable to scale values that are based on these models. The implications of a new theory of measurement which addresses these problems will be reviewed. The main elements of this theory include a new scale classification, the Principle of Reflection, and homogeneity considerations. In particular, it will be demonstrated that utility theory cannot serve as a foundation for decision theory, game theory, or economics.



Presenter:  William Batchelder
Presentation type:  Talk
Presentation date/time:  7/27  15:10-15:35
 
Modeling Free Recall Order Data
 
William Batchelder, University of California, Irvine
William Shankle, University of California, Irvine
Jared Smith, University of California, Irvine
 
We analyze data from over 20,000 subjects, each of whom participated in an identical ten-item free recall experiment. The experiment involved three study-test trials followed by a delayed test trial, where presentation order was identical over study trials. The subjects ranged from healthy elderly to elderly with early stage dementia, and each subject's data included covariates on gender, age, and education. Some subjects had complete neurological classifications. We analyzed the data on each item at four levels: 1. percent correct over the four test trials; 2. a ten-by-four bitmap of correct and incorrect recalls; 3. four-tuple frequencies of the sixteen possible successful and unsuccessful recalls of each item across the four test trials; and 4. actual item recall orders on test trials. A correspondence analysis of the bitmap data provided accurate classification of those subjects with mild dementia. A Markov model of the four-tuple frequencies was analyzed with various Bayesian hierarchical versions that allowed both subject and item inhomogeneity. We provided a model of recall order based on memory strength, and we show using odds ratios that departures from the model suggest that there is a competing tendency to recall non-recency items in the presentation order and recency items in the reverse order. While we have yet to provide a satisfactory model of the recall order data, the huge size of the data base provides a test bed for future free recall models.
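
As a concrete reading of level 3, a short sketch (with random stand-in data, not the clinical data set) of reducing each item's four test trials to one of the sixteen recall patterns and tallying them:

```python
import numpy as np
from collections import Counter

# Stand-in data: 100 subjects x 10 items x 4 test trials of 0/1 recalls.
rng = np.random.default_rng(7)
bitmaps = rng.integers(0, 2, size=(100, 10, 4))

# Each item contributes one 4-tuple; 2^4 = 16 patterns are possible.
counts = Counter(tuple(item) for subject in bitmaps for item in subject)
for pattern in sorted(counts):
    print(pattern, counts[pattern])
```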



Presenter:  J.Neil Bearden
Presentation type:  Symposium
Presentation date/time:  7/27  9:00-9:25
 
Searching for the Party
 
J.Neil Bearden, INSEAD
Terry Connolly, University of Arizona
Ryan Murphy, Columbia University
 
We study the problem of a decision maker (DM) trying to reach a party whose location is uncertain. She has a prior probability distribution over the potential locations of the party, her search path is constrained, and she has the objective of reaching the party as quickly as possible. We refer to the problem as the Party Search Problem (PSP). We first show how the PSP can be formulated as a partially observable Markov decision process (POMDP). Next, we show that certain features of the problem make it easily solvable by conventional dynamic programming methods. Finally, we present data from a laboratory experiment in which financially motivated subjects played the PSP. Our results show that subjects display significant "navigational waste." That is, they take travel paths that significantly lengthen their search time, compared to the optimal travel plans. These inefficiencies arise from several behavioral biases. For instance, they tend to make "u-turns" too soon during their search. This work brings together ideas from operations research, computer science, and experimental psychology.
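
A toy version of the optimal benchmark against which "navigational waste" can be measured. This is a sketch under our own simplifying assumptions (houses on a line, instantaneous checks, travel cost equal to distance), not the authors' task:

```python
import itertools

houses = [1, 3, 6]          # hypothetical candidate party locations
prior = [0.5, 0.2, 0.3]     # prior probability of the party at each house

def expected_time(order, start=0):
    """Expected time to reach the party if houses are visited in `order`."""
    t, pos, total = 0.0, start, 0.0
    for h in order:
        t += abs(houses[h] - pos)
        pos = houses[h]
        total += prior[h] * t   # the search ends at the party's house
    return total

best = min(itertools.permutations(range(len(houses))), key=expected_time)
print("optimal order:", best, "expected time:", expected_time(best))
```

Subjects' realized search times can then be compared against this expected-time optimum.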



Presenter:  Michael Birnbaum
Presentation type:  Talk
Presentation date/time:  7/28  10:30-10:55
 
Transitivity of Preference in Individuals
 
Michael Birnbaum, California State University, Fullerton
Jeffrey Bahra, California State University, Fullerton
 
In a classic paper, Tversky (1969) reported that some participants violated transitivity systematically when choosing between specially constructed gambles. These violations were interpreted as the result of a lexicographic semiorder model for choice. Recently, a variant of the lexicographic semiorder known as the priority heuristic has been proposed as a descriptive model of choices between risky gambles by Brandstaetter, Gigerenzer, and Hertwig (2006). This paper will present empirical tests of transitivity in fifty individuals using three designs intended to test for systematic violations of transitivity predicted by the priority heuristic and related lexicographic models. Results show that the data of the vast majority of participants could be described with transitive orders, and that a few participants switched between different transitive orders during the course of the experiment. No individual was observed whose choices were mostly consistent with the priority heuristic (PH), nor were the data of any person convincingly intransitive. The majority of individuals showed systematic violations of cumulative prospect theory (CPT) and expected utility (EU) such as systematic violations of stochastic dominance. A special case of the transfer of attention exchange (TAX) model provided a better fit to individual data than did PH, CPT or EU for the majority of individuals.



Presenter:  Stephen Broomell
Presentation type:  Talk
Presentation date/time:  7/27  9:25-9:50
 
Decomposing Inter-Judge Correlation
 
Stephen Broomell, University of Illinois at Urbana-Champaign
David Budescu, University of Illinois at Urbana-Champaign
 
Decision makers seek advice from multiple experts in order to increase diagnostic ability and improve the quality of their decisions. The effects of such aggregations have been well studied (Ariely et al., 2001; Clemen & Winkler, 1985; Hogarth, 1978; Johnson, Budescu & Wallsten, 2001; Wallsten & Diederich, 2001). Several measures of quality of performance (precision, discrimination, and validity) increase with each additional judge, but at a diminishing rate that is a function of inter-judge correlation. These findings stress the importance of inter-judge dependence. Using a set of reasonable assumptions, we derive a model to predict the magnitude of inter-judge correlation as a function of five underlying factors. The first two factors are cue similarity, ρc, and the number of cues, N, which describe assumptions about nature. The final three factors, ρw, σ²w, and δ, describe assumptions about the judges (similarity in training, cue use, and accuracy, respectively). We found that the factors ρc, N, and ρw increase inter-judge correlation, while σ²w and δ decrease it. This model allows us to study the relative importance and interrelation of these five factors with respect to inter-judge correlation. Interrelation between factors impedes complete dominance, but generally we found that ρc has more influence than ρw and σ²w. Using our model in conjunction with existing models, we can also address a variety of practical questions. For example, our results indicate that additional judges increase efficacy at a greater rate than additional cues (cues can sometimes slightly decrease efficacy).
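
An illustrative simulation of the quantity being modeled (our own stand-in for the derivation, not the authors' model): judges combine N intercorrelated cues with judge-specific weights plus error, and the resulting inter-judge correlation is estimated directly:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_judges, n_cases = 5, 10, 2000
rho_c, sigma_e = 0.3, 1.0          # cue intercorrelation; judge error sd

# Cues share a common correlation rho_c (an exchangeable covariance matrix).
cov = rho_c * np.ones((N, N)) + (1 - rho_c) * np.eye(N)
cues = rng.multivariate_normal(np.zeros(N), cov, size=n_cases)

# Judges share a base weighting policy but deviate from it individually.
weights = np.ones(N) + 0.5 * rng.standard_normal((n_judges, N))
judgments = cues @ weights.T + sigma_e * rng.standard_normal((n_cases, n_judges))

r = np.corrcoef(judgments.T)       # n_judges x n_judges correlation matrix
print("mean inter-judge r:", r[np.triu_indices(n_judges, k=1)].mean().round(3))
```

Raising rho_c or lowering the weight dispersion in this sketch raises the mean inter-judge correlation, the direction of effect the abstract reports.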



Presenter:  Scott Brown
Presentation type:  Talk
Presentation date/time:  7/26  14:05-14:30
 
Simpler still: The simplest complete model of choice RT so far
 
Scott Brown, University of Newcastle
Andrew Heathcote, University of Newcastle
 
Recently, there has been an effort to simplify the theory and application of choice RT models. We present the simplest complete model so far - a set of linear, independent, ballistic accumulators. The "linear-BA" model accounts for RT distributions, the speed-accuracy tradeoff, and changes in RT and accuracy with decision difficulty. This model is so simple that its distributions can be derived analytically, and these derivations apply to choices between any number of alternatives. We demonstrate that the model can fit two-choice data about as well as the current field leaders, and can also fit multi-alternative choice data that some other models cannot.
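
A minimal simulation sketch of the accumulator architecture described, with hypothetical parameter values (threshold b, start-point range A, drift noise s, non-decision time t0):

```python
import numpy as np

rng = np.random.default_rng(1)

def lba_trial(drift_means, b=1.0, A=0.5, s=0.25, t0=0.2):
    """One trial: each accumulator starts at a uniform point in [0, A] and
    rises linearly at a normally drawn rate; the first to reach b wins."""
    starts = rng.uniform(0, A, size=len(drift_means))
    rates = rng.normal(drift_means, s)
    rates = np.where(rates > 0, rates, 1e-6)   # crude guard against negatives
    times = (b - starts) / rates
    winner = int(np.argmin(times))
    return winner, t0 + times[winner]

trials = [lba_trial([1.0, 0.7]) for _ in range(10000)]
choices = np.array([c for c, _ in trials])
print("P(choice 0) =", (choices == 0).mean())
```

Because each accumulator is deterministic within a trial, the finishing-time distributions have closed forms, which is what makes the model's likelihood analytic for any number of accumulators.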



Presenter:  Cara Buck
Presentation type:  Poster
Presentation date/time:  7/27  17:30-18:30
 
Target priming effects in the Eriksen flanker task
 
Cara Buck, University of California, San Diego
Eddy Davelaar, Birkbeck, University of London
David Huber, University of California, San Diego
 
In the Eriksen flanker paradigm, peripheral flankers help or harm performance depending on their congruency with a central target. It has been observed that immediate preview of the flankers reduces the classic flanker effect. In light of priming experiments that reveal perceptual discounting, we investigated the contribution of target priming to the observed flanker preview effect. Three experiments manipulated the presence/absence of flankers in the response display as well as the duration, location, identity, and response characteristics of prime displays that appeared immediately prior to the response display. Participants made consonant-vowel judgments to target letters, allowing separate measurement of identity and response priming. In every experiment, results were remarkably similar whether flankers were present or absent, which suggests that the flanker preview effect is largely due to priming of the target letter. Using a dynamic neural network model with transient habituation, we accounted for both identity and response effects at the different prime durations.



Presenter:  Jerome Busemeyer
Presentation type:  Talk
Presentation date/time:  7/26  15:35-16:00
 
Quantum Information Processing Explanation for Interactions between Inferences and Decisions
 
Jerome Busemeyer, Indiana University
Zheng Wang, Ohio State University
 
Markov and quantum information processing models are compared with respect to their ability to explain two puzzling findings from empirical research on human inference and decision making. Both findings involve a task that requires making an inference about one of two possible uncertain states, followed by a decision about two possible courses of action. Two conditions are compared: under one condition, the decisions are obtained after discovering or measuring the uncertain state; under another condition, choices are obtained before resolving the uncertainty, so that the state remains unknown or unmeasured. Systematic departures from the Markov model are observed, and these deviations are explained as interference effects using the quantum model.
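
The contrast can be stated in one pair of equations (a hedged illustration in our notation, not the authors' specific parameterization):

```latex
% Markov model, state unmeasured: the law of total probability holds,
P(A) \;=\; P(S_1)\,P(A \mid S_1) \;+\; P(S_2)\,P(A \mid S_2).
% Quantum model: amplitudes superpose before measurement,
P(A) \;=\; \bigl|\psi_1\alpha_1 + \psi_2\alpha_2\bigr|^{2}
      \;=\; |\psi_1\alpha_1|^{2} + |\psi_2\alpha_2|^{2}
      \;+\; 2\,\mathrm{Re}\!\bigl(\psi_1\alpha_1\,\overline{\psi_2\alpha_2}\bigr).
```

The final cross term is the interference that allows the unmeasured condition to deviate from the total-probability prediction, which is exactly where the Markov model fails.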



Presenter:  Carter Butts
Presentation type:  Talk
Presentation date/time:  7/26  13:15-13:40
 
Likelihood-based Inference for Cycle Structure Bias in Cognitive Models of Social Interaction
 
Carter Butts, University of California, Irvine
 
Discrete exponential family models for random graphs (ERGs) are increasingly popular tools for the analysis of discrete relational data. ERGs allow for the parameterization of complex dependence among edges within a likelihood-based framework, and are often used to model local influences on global structure. This paper presents a family of cycle statistics, which allow for the modeling of long-range dependence within ERGs. These statistics are shown to arise from a family of partial conditional dependence assumptions based on an extended form of reciprocity, here called reciprocal path dependence. Algorithms for computing cycle statistic change scores and the cycle census are provided, as are analytical expressions for the first and approximate second moments of the cycle census under a Bernoulli null model. One important application of the above model family arises in the context of subjects' subjective evaluations of social interaction within their local environment (sometimes called "cognitive social structures"). Balance theory, in particular, posits a tendency towards dyadic and triadic closure in positive relations. Implications of balance theory for the properties of long-cycle structure have been derived by Harary and others, but empirical evaluation has been hindered by the lack of statistical models for cycles of length greater than 3. We here use Markov chain Monte Carlo methods to fit ERG models for biases in cycle structure formation to two sets of 21 cognitive social structures from managers in a high-tech manufacturing firm. Implications of the estimated structural parameters for modeling of cognitive social structures are discussed.



Presenter:  Daniel Cavagnaro
Presentation type:  Talk
Presentation date/time:  7/27  9:00-9:25
 
Projection of a Medium
 
Daniel Cavagnaro, UCI
 
Learning spaces, partial cubes, and preference orderings are just a few of the many structures that can be captured by a 'medium,' a set of transformations on a possibly infinite set of states, constrained by four strong axioms. In this paper, we introduce a method for summarizing an arbitrary medium by gathering its states into equivalence classes and treating each equivalence class as a state in a new structure. When the new structure is also a medium, it can be characterized as a projection of the original medium. We show that any subset of tokens from an arbitrary medium generates a projection, and that each state in the projection determines a submedium. Potential applications include efficiently summarizing learning spaces for storage in computer memory and lumping states in a stochastic model of preference evolution.



Presenter:  Michael Lamport Commons
Presentation type:  Talk
Presentation date/time:  7/28  9:00-9:25
 
Comparing Rasch Scaled Stage Scores of Items from Five Instruments to Their Hierarchical Complexity and to Each Other: One Scale or Many?
 
Michael Lamport Commons, Harvard Medical School
Carrie Melissa Ost, Dare Institute
Ean Stuart Bett, Harvard University
Jose Ferreira Alves, University of Minho
Helena Marchand, University of Lisboa
 
This study provides empirical evidence for the Model of Hierarchical Complexity's ability to explain one major dimension of difficulty. That dimension is the hierarchical complexity of items in an instrument. The five instruments studied were: "Jesus' sayings", "Anti death penalty", "Helper-person problem", "To not report incest," and "To report incest." Each instrument consisted of five sets of items. Each item reflected one of five orders of hierarchical complexity, as had been done in previous work (Commons et al., 2006). Participants rated the quality of the items on a 1 to 6 scale. In a factor analysis of all the tests together, most of the tests loaded highly on the first factor. This supports the view that stage, across quite different tasks, is a single factor. A Rasch analysis of the performance of the 207 participants on each instrument was conducted. The relationship between the hierarchical complexity of items and their Rasch scaled scores for each test was: Jesus r(3) = .828, Anti Death Penalty r(3) = .921, Helper-Person r(3) = .990, Not to Report Incest r(3) = -.838, Pro Report r(3) = -.916. A Rasch analysis across all items for all instruments found that the items all fit on a single scale, but sequentiality was not very good across domains. The factor analysis and regressions of Rasch stage scores on hierarchical complexity of items supported that test items were measuring stage, thereby reflecting the order of hierarchical complexity of the task, providing support for the Model of Hierarchical Complexity as one good predictor of task difficulty.



Presenter:  William Cook
Presentation type:  Symposium
Presentation date/time:  7/27  13:15-14:05
 
Traveling Salesman Problem
 
William Cook, Georgia Tech
 
Although the complexity of the traveling salesman problem is still unknown, for over 50 years its study has led the way to improved solution methods in many areas of mathematical optimization. We will discuss the history of the TSP and examine the role it has played in modern computational mathematics. We will also present a survey of general techniques used in the solution of the TSP and other problems in discrete mathematics.



Presenter:  Eddy Davelaar
Presentation type:  Talk
Presentation date/time:  7/27  9:50-10:15
 
Extending the conflict-monitoring hypothesis
 
Eddy Davelaar, Birkbeck, University of London
 
A neurocomputational model is extended to address relevant data in a version of the Eriksen flanker task that allows investigation of stimulus- and response-conflict. The conflict-monitoring framework by Botvinick and colleagues proposes that response-conflict is higher in incongruent conditions compared to congruent or neutral conditions and that increases in conflict lead to increased control on subsequent trials. A computational study is presented in which the conflict signal is (a) computed at every level of processing (response, stimulus) and (b) used to modulate the input in the same trial. Results show that the model captures (1) the profile of distributional plots seen in the behavioral literature and (2) the patterns of hemodynamic responses seen in the neuroimaging literature. Suggestions for combining neuroimaging and behavioral analyses are discussed.
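
For concreteness, a sketch of the energy-style conflict measure commonly used in this framework, computed here at the response level only (activation values are hypothetical):

```python
import numpy as np

def conflict(acts, inhibition=1.0):
    """Conflict as the summed pairwise product of co-active, mutually
    inhibitory units (an energy-style measure)."""
    acts = np.asarray(acts, dtype=float)
    total = 0.0
    for i in range(len(acts)):
        for j in range(i + 1, len(acts)):
            total += inhibition * acts[i] * acts[j]
    return total

print(conflict([0.9, 0.1]))   # one dominant response: low conflict
print(conflict([0.6, 0.5]))   # two co-active responses: high conflict
```

In the extension described, the same computation is applied at the stimulus level as well, and the resulting signal feeds back to modulate input within the trial.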



Presenter:  Clintin Davis-Stober
Presentation type:  Talk
Presentation date/time:  7/28  11:20-11:45
 
Ternary Choice Data and Order Polytopes
 
Clintin Davis-Stober, University of Illinois Urbana-Champaign
Michel Regenwetter, University of Illinois Urbana-Champaign
 
We discuss conditions under which binary and ternary choice probabilities are induced by various kinds of binary relations, such as weak, partial, semi, and interval orders. These order conditions take the form of a family of convex polytopes when represented in the appropriate vector space. We illustrate these conditions with applications to empirical choice data. We also present results pertaining to the identification of these order polytopes, i.e., the enumeration of their facet-defining inequalities.
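
A classic concrete instance of such a facet-defining inequality (for the linear ordering polytope, used here purely to illustrate the general approach): binary choice probabilities induced by rankings must satisfy the triangle inequalities, which hypothetical data can violate:

```python
from itertools import permutations

# Hypothetical binary choice probabilities for items a, b, c.
p = {("a", "b"): 0.7, ("b", "a"): 0.3,
     ("b", "c"): 0.6, ("c", "b"): 0.4,
     ("a", "c"): 0.2, ("c", "a"): 0.8}

# Triangle inequality: P(x,y) + P(y,z) - P(x,z) <= 1 for all distinct x,y,z.
for x, y, z in permutations(["a", "b", "c"], 3):
    lhs = p[(x, y)] + p[(y, z)] - p[(x, z)]
    if lhs > 1:
        print(f"violated: P({x},{y}) + P({y},{z}) - P({x},{z}) = {lhs:.2f} > 1")
```

Testing weak, partial, semi, and interval order polytopes proceeds in the same spirit, with different (and generally harder to enumerate) facet systems.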



Presenter:  Nando De Freitas
Presentation type:  Symposium
Presentation date/time:  7/26  9:00-9:50
 
Modern Monte Carlo Methods
 
Nando De Freitas, UBC
 
In this talk I will introduce modern Monte Carlo methods, including state-of-the-art sequential Monte Carlo (SMC) and trans-dimensional Markov chain Monte Carlo (MCMC). After laying out the foundation, I will show how these flexible techniques are ideally suited for carrying out computation in sophisticated probabilistic models of cognition. In particular, I will show how they can be used to learn models with time-varying properties, unknown number of variables, and (possibly unknown) complex relational and hierarchical structures. I will also show how these methods can be used to attack problems in stochastic decision making, such as active learning, experimental design, optimal control and sequential Markov decision processes.
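
A bare-bones example of the first ingredient, a bootstrap particle filter on an illustrative linear-Gaussian model of our choosing (not a model from the talk):

```python
import numpy as np

# Latent random walk x_t = x_{t-1} + process noise; observation
# y_t = x_t + measurement noise. SMC approximates p(x_t | y_{1:t}).
rng = np.random.default_rng(2)
T, n_particles = 50, 1000
q, r = 0.5, 1.0                      # process and observation noise sds

x_true = np.cumsum(q * rng.standard_normal(T))
y = x_true + r * rng.standard_normal(T)

particles = np.zeros(n_particles)
for t in range(T):
    particles = particles + q * rng.standard_normal(n_particles)   # propagate
    logw = -0.5 * ((y[t] - particles) / r) ** 2                    # weight
    w = np.exp(logw - logw.max()); w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample

print("posterior mean:", particles.mean().round(2), "truth:", x_true[-1].round(2))
```

Trans-dimensional MCMC adds moves that change the number of variables, which is what handles the "unknown number of variables" problems mentioned above.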



Presenter:  Lawrence DeCarlo
Presentation type:  Talk
Presentation date/time:  7/26  9:00-9:25
 
On Some Mixture SDT Models for Associative-Recognition
 
Lawrence DeCarlo, Teachers College, Columbia University
 
Participants in associative-recognition tasks are shown word pairs to study, and later, in a test, are shown intact or re-arranged word pairs. Mixture SDT models for associative-recognition tasks can be motivated by considering the effects of attention on each trial, as has previously been done for recognition memory, source recognition, and the mirror effect. It is shown that a mixture SDT model proposed for source recognition tasks (DeCarlo, 2003) can be applied to associative-recognition tasks. A unique aspect of the associative-recognition task is that re-arranged word pairs consist of two words, either or both of which might provide some associative information - even if one doesn't remember the word pair (or the pair is not familiar), one may feel confident that one of the words was not paired with the other. Thus, in the model presented here, re-arranged word pairs are viewed as providing associative information. Analysis of recent data suggests that there is indeed associative strength for re-arranged word pairs, but it appears to be smaller than that for intact pairs. The use of new-word pairs, or lures, also raises some interesting questions. For example, new word pairs in associative-recognition tasks might provide information about association: if one is confident that a word is new, then one can also be confident that it was not in a previously presented word pair. Models that incorporate lures and other extensions are considered.
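
A hedged sketch of the mixture idea in our own notation (a generic mixture-SDT response probability, not the paper's exact parameterization): on a proportion λ of re-arranged trials the pair carries associative strength, otherwise only baseline strength is available:

```latex
P(\text{``intact''} \mid \text{re-arranged}) \;=\;
  \lambda\,\Phi\!\left(\frac{\mu_a - c}{\sigma}\right)
  \;+\; (1-\lambda)\,\Phi\!\left(\frac{-c}{\sigma}\right),
```

where c is the response criterion and μa the associative strength available on attended trials. The data pattern described corresponds to a positive μa for re-arranged pairs that is smaller than the corresponding value for intact pairs.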



Presenter:  Simon Dennis
Presentation type:  Symposium
Presentation date/time:  7/28  10:30-10:55
 
A syntagmatic approach to syntactic representation: Extracting dependency and constituency information from corpora
 
Simon Dennis, University of Adelaide
 
In the linguistic literature, a distinction can be drawn between dependency grammars, which assume that syntax involves capturing the relationships between individual words in a sentence, and constituency grammars, which assume that syntax involves capturing the (hierarchical) relationships between contiguous sequences of words in a sentence. Creating the matrix of syntagmatic relationships between the words in a sentence (i.e., which words follow which words) exposes both dependency and constituency units. Dependency units correspond to the rows and columns associated with an individual word, and constituency units correspond to contiguous triangles. Compiling the matrices corresponding to the sentences of a corpus creates a third-order tensor to which machine learning algorithms can be applied to extract these syntactic units. In this talk, I will contrast the units extracted by nonnegative matrix factorization and three versions of sparse independent components analysis. The units produced will be compared against those proposed by standard phrase structure grammar (Radford, 1988) and link grammar (Sleator & Temperley, 1993) as examples of constituency and dependency grammars, respectively.
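
A small sketch of the construction as described (toy corpus; the NMF is a bare-bones multiplicative-update version, included only for self-containment):

```python
import numpy as np

# Per sentence, S[i, j] = 1 if word type i immediately precedes word type j;
# flattening and stacking these matrices over sentences yields the tensor
# (here unfolded to a matrix) that factorization methods operate on.
sentences = [["the", "dog", "ran"], ["the", "cat", "ran"], ["a", "dog", "slept"]]
vocab = sorted({w for s in sentences for w in s})
V = len(vocab); idx = {w: i for i, w in enumerate(vocab)}

X = np.zeros((len(sentences), V * V))
for n, s in enumerate(sentences):
    for w1, w2 in zip(s, s[1:]):
        X[n, idx[w1] * V + idx[w2]] = 1.0

def nmf(X, k=2, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates, minimal version."""
    rng = np.random.default_rng(3)
    W = rng.random((X.shape[0], k)); H = rng.random((k, X.shape[1]))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(X)
print(W.round(2))   # sentence loadings on latent syntagmatic units
```

The rows of H, reshaped to V x V, are the candidate dependency/constituency units.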



Presenter:  Robert Dougherty
Presentation type:  Symposium
Presentation date/time:  7/26  13:40-14:05
 
Estimating Neural Response Functions with fMRI
 
Robert Dougherty, Stanford University
 
Functional MRI can be used to measure neural responses to systematic stimulus manipulations. Combined with functional localizer techniques such as retinotopy, these neural response functions can be accurately localized and simultaneously estimated in several brain regions. I will describe methods used to measure, localize and analyze neural response functions in the visual system. These methods will be illustrated with examples from our studies on the development of visual motion and text processing in children.



Presenter:  Joseph Dunlop
Presentation type:  Poster
Presentation date/time:  7/26  17:30-18:30
 
Combined bootstrap and DISTATIS as a reliability measure for multivariate analysis: a neuroimaging example
 
Joseph Dunlop, The University of Texas at Dallas
Hervé Abdi, The University of Texas at Dallas
Nils Penard, The University of Texas at Dallas
Alice O'Toole, The University of Texas at Dallas
 
Brain imaging datasets are difficult to analyze because they are very large and rectangular (i.e., the number of voxels is much larger than the number of images). The traditional approach to this problem computes one parametric statistic per voxel. Because it assumes voxel independence, this approach requires a drastic correction for multiple comparisons. An alternative to the voxel approach is pattern-based analysis, which minimizes the number of comparisons and takes advantage of the dependence between voxels. In one example of this approach, O'Toole et al. (2005), re-analyzing data from Haxby et al. (2001), used pattern-based classification to determine the functional distance between visual categories of objects. The data were fMRI scans from participants who viewed pictures from eight categories. The scans were processed by a classifier that predicted the category of the viewed pictures. The performance of the classifier was evaluated with a between-category distance matrix (d matrix). Unfortunately, this d matrix has no associated measure of reliability, and therefore could not be used to test statistical hypotheses. Here, we use bootstrap resampling to create a non-parametric estimate of reliability for d matrices. To analyze a d matrix, we create many estimates of the d matrix using a bootstrap resampling protocol and then compare these estimates using a variant of multidimensional scaling (called DISTATIS). We then transform these d matrices into a map that displays the categories as confidence ellipses, with the best estimate of each category as the center of its ellipse.
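
A simplified sketch of the resampling logic (stand-in data; the DISTATIS comparison step itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(4)
n_cats, n_scans, n_feat = 4, 30, 50
# Stand-in per-scan feature vectors, shifted by category for visible structure.
data = {c: rng.standard_normal((n_scans, n_feat)) + c for c in range(n_cats)}

def d_matrix(data):
    """Between-category Euclidean distances among category mean patterns."""
    means = np.array([data[c].mean(axis=0) for c in range(n_cats)])
    diff = means[:, None, :] - means[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

boot = []
for _ in range(200):   # resample scans within category, recompute d each time
    resampled = {c: x[rng.integers(0, n_scans, n_scans)] for c, x in data.items()}
    boot.append(d_matrix(resampled))
boot = np.stack(boot)
print("mean d matrix:\n", boot.mean(axis=0).round(2))
```

The spread of the bootstrapped d matrices is what DISTATIS turns into confidence ellipses around each category.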



Presenter:  Ehtibar Dzhafarov
Presentation type:  Talk
Presentation date/time:  7/26  11:20-11:45
 
A New Geometry Of Subjective Stimulus Spaces
 
Ehtibar Dzhafarov, Purdue
 
Most of mathematics has its roots in physics, having grown from the internal logic of physics problems. By contrast, the mathematics used in psychology is primarily "ready-made": it is adopted and adapted instead of growing from the internal logic of substantive issues. A recent development in Fechnerian Scaling breaks with this tradition by constructing a mathematical theory specifically aimed at the oldest problem of scientific psychology: the reconstruction of subjective distances among stimuli from their discriminability. The central notion of the theory is that of a dissimilarity function, which very likely captures all empirical measures of dissimilarity, whether "direct" or computed from discrimination probabilities. The subjective distance between stimuli x and y is defined by the smallest amount of accumulated dissimilarity as one "moves" from x to y and back through intermediate stimuli. If stimuli form an arc-connected space in the topology induced by a dissimilarity function, the latter is used to construct a new mathematical theory for computing lengths of continuous paths. Most of the fundamental results of the traditional metric-based path length theory (additivity, lower semicontinuity, etc.) turn out to hold in the general dissimilarity-based path length theory. The triangle inequality and symmetry are therefore not essential for these results. In special arc-connected spaces (e.g., Euclidean n-spaces) the theory specializes to traditional versions of Finsler geometry. The latter, however, is arrived at rather than borrowed ready-made, distinguishing thereby the present development from the previous uses of differential geometry in psychology.
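
The central definition can be written compactly (our notation, a sketch of the construction the abstract describes):

```latex
% Given a dissimilarity function D on the stimulus set, the subjective
% (Fechnerian) distance between x and y is the smallest accumulated
% dissimilarity over finite chains leading from x through y and back to x:
G(x,y) \;=\; \inf\Bigl\{\, \sum_{i=1}^{n} D(z_{i-1}, z_i) \;:\;
  z_0 = x,\ z_k = y,\ z_n = x \Bigr\}.
```

G is symmetric by construction even when D itself is not, which is one reason symmetry need not be assumed at the level of the dissimilarity function.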



Presenter:  Kyler Eastman
Presentation type:  Talk
Presentation date/time:  7/27  11:20-11:45
 
Optimal Weighting of Speed and Accuracy in a Sequential Decision-Making Task
 
Kyler Eastman, University of Texas, Austin
Brian Stankiewicz, University of Texas, Austin
Alex Huk, University of Texas, Austin
 
Many sequential sampling models suggest decisions rely on the accumulation of evidence over time until reaching a particular threshold. These models can often account for variations of speed and accuracy in perceptual tasks. It has been hypothesized that the threshold maximizes an implicit reward function that incorporates both the speed and accuracy of the response (Gold & Shadlen, 2003). This approach has produced a family of models that can describe a variety of behaviors in two-alternative forced choice (TAFC) tasks (Bogacz et al., 2006). We present a model of optimal sequential perceptual decision-making in a task that modifies the traditional TAFC by adding an option of acquiring additional information/samples at a cost (e.g., time). In the task, the observer receives a sample from two overlapping distributions. The observer can either declare which distribution was sampled or choose to receive another sample. A reward structure specifies the costs for correct and incorrect answers along with the cost for each sample. The model adapts the drift-diffusion model (Ratcliff & Rouder, 1998; Palmer, Huk, & Shadlen, 2005) for sequential decisions using a partially observable Markov decision process. The model provides a framework for evaluating the cost structures used by humans in a perceptual judgment task along with understanding the decision maker's sensitivity to different reward structures. The model also provides a mechanism for evaluating the effects of imperfect integration (memory limitations), variable signal strengths, and variations in the reward structure for human and optimal behavior.
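
A sketch of the optimal-stopping computation such a model requires (all parameter values hypothetical): because the posterior belief p = P(distribution A | data) is a sufficient statistic, value iteration can run on a belief grid with three actions, declare A, declare B, or pay c to sample again:

```python
import numpy as np

R_hit, R_miss, c = 1.0, -1.0, -0.05
mu, sigma = 0.5, 1.0                      # A ~ N(+mu, sigma), B ~ N(-mu, sigma)
grid = np.linspace(0.001, 0.999, 999)     # discretized belief states
xs = np.linspace(-4.0, 4.0, 81)           # quadrature over the next sample

def posterior(p, x):
    la = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    lb = np.exp(-0.5 * ((x + mu) / sigma) ** 2)
    return p * la / (p * la + (1 - p) * lb)

declare = np.maximum(grid * R_hit + (1 - grid) * R_miss,
                     (1 - grid) * R_hit + grid * R_miss)
V = declare.copy()
for _ in range(50):                       # Bellman backups
    V_sample = np.empty_like(grid)
    for i, p in enumerate(grid):
        px = (p * np.exp(-0.5 * ((xs - mu) / sigma) ** 2)
              + (1 - p) * np.exp(-0.5 * ((xs + mu) / sigma) ** 2))
        px /= px.sum()                    # predictive weights over samples
        V_sample[i] = c + px @ np.interp(posterior(p, xs), grid, V)
    V = np.maximum(declare, V_sample)

region = grid[V_sample > declare]
print(f"keep sampling for beliefs in [{region[0]:.3f}, {region[-1]:.3f}]")
```

The boundaries of the sampling region play the role the decision threshold plays in the drift-diffusion model, and they shift as the reward structure changes.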



Presenter:  Ido Erev
Presentation type:  Symposium
Presentation date/time:  7/27  9:25-9:50
 
Learning, Risk Attitude, and Hot Stoves in Partially Observable Markov Environments
 
Ido Erev, Technion
Guido Biele, Max Planck Institute for Human Development
Eyal Ert, Technion
 
This research examines decisions from experience in partially observable Markov decision processes (POMDPs). Two experiments revealed four main effects. (1) Risk neutrality: The typical participant did not learn to become risk averse, contradicting the hot stove effect. (2) Sensitivity to the transition probabilities that govern the Markov process. (3) Positive recency: The probability that a risky choice would be repeated was higher after a win than after a loss. (4) Inertia: The probability that a risky choice would be repeated following a loss was higher than the probability of a risky choice following a safe choice. These results could be described with a simple contingent sampler model, which assumes that choices are made based on small samples of experiences contingent on the current state.
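
A bare-bones reading of the contingent sampler idea, with hypothetical payoffs and transition structure (ours, for illustration only):

```python
import random

random.seed(5)
K = 3                                  # memory sample size per decision
memory = {}                            # (state, option) -> experienced payoffs

def payoff(state, option):
    if option == "safe":
        return 0.0
    return random.choice([1.0, -1.0]) if state == "good" else -0.5

def choose(state):
    def sample_mean(option):
        history = memory.get((state, option), [])
        if not history:
            return 0.0                 # no experience yet: indifferent
        return sum(random.choice(history) for _ in range(K)) / K
    return max(["safe", "risky"], key=sample_mean)

state = "good"
for _ in range(1000):
    option = choose(state)
    memory.setdefault((state, option), []).append(payoff(state, option))
    state = "good" if random.random() < 0.8 else "bad"   # Markov transition

print({k: round(sum(v) / len(v), 2) for k, v in memory.items()})
```

Because the recalled sample is small and conditioned on the current state, a model of this kind naturally produces positive recency and sensitivity to the transition structure.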



Presenter:  Ya'akov Gal
Presentation type:  Talk
Presentation date/time:  7/26  13:40-14:05
 
Modeling Reciprocal Behavior in Bilateral Negotiation
 
Ya'akov Gal, MIT and Harvard University
Avi Pfeffer, Harvard University
 
Reciprocity is a key determinant of human behavior. The ability to retaliate or reward others' actions in the absence of direct utility benefit has been shown to bring about and maintain cooperation, and to induce punishment as well as forgiveness between players over time. The behavioral sciences literature has identified strategies that exhibit reciprocal qualities and that, under certain conditions, are optimal. These models were confined to simple, static interactions such as the prisoner's dilemma game, and assumed rational behavior. Consequently, they do not capture many of the dynamic settings and behaviors that characterize human social interaction. This work proposes a model for bilateral interaction between people in an environment which provides an analog to real-world task settings. This environment varies players' possible strategies, dependency relationships, and their rewards at each round. The model represents reciprocity as a tradeoff between two social factors: the extent to which players reward and retaliate others' past actions (retrospective reasoning), and their estimate of the future ramifications of their actions (prospective reasoning). Results show that a model that reasons about reciprocal behavior provides better predictive power than models that learn from people but do not reason about reciprocity, or that play various game-theoretic equilibria. In addition, retrospective reasoning was found to be more relevant than prospective reasoning in people's deliberation processes. These results suggest that the types of social factors that affect people's reciprocal interaction in the world can be defined and learned within a formal framework.



Presenter:  Dashan Gao
Presentation type:  Talk
Presentation date/time:  7/26  10:30-10:55
 
Decision-theoretic visual saliency and its implications for pre-attentive vision
 
Dashan Gao, UC San Diego
Nuno Vasconcelos, UC San Diego
 
A decision-theoretic formulation of visual saliency, first proposed for top-down processing (object recognition), is extended to encompass the problem of bottom-up saliency. Under this formulation, optimality is defined in the minimum-probability-of-error sense, under a constraint of computational parsimony. The saliency of the visual features at a given location of the visual field is defined as the power of those features to discriminate between the stimulus at the location and a null hypothesis. For bottom-up saliency, this is the set of visual features that surround the location under consideration. Discrimination is defined in an information-theoretic sense, and the optimal saliency detector is derived for the class of stimuli that comply with various known statistical properties of natural images. The optimal detector is shown to replicate the fundamental properties of the psychophysics of saliency. These include pop-out, inability to detect feature conjunctions, saliency asymmetries with respect to feature presence vs. absence, compliance with Weber's law, and decreasing saliency with background heterogeneity. Finally, it is shown that the optimal detector has a one-to-one mapping to the standard architecture of primary visual cortex (V1), and can be applied to the solution of generic inference problems. In particular, for the broad class of stimuli studied, it performs the three fundamental operations of statistical inference: assessment of probabilities, implementation of the Bayes decision rule, and feature selection.
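
An illustrative center-surround reading of the saliency computation (our simplification with histogram estimates, not the authors' derivation):

```python
import numpy as np

rng = np.random.default_rng(8)
center = rng.normal(2.0, 1.0, 500)      # stand-in feature responses, center
surround = rng.normal(0.0, 1.0, 5000)   # stand-in feature responses, surround

bins = np.linspace(-5, 7, 40)
pc, _ = np.histogram(center, bins, density=True)
ps, _ = np.histogram(surround, bins, density=True)
pc = pc + 1e-9; ps = ps + 1e-9          # avoid log(0)
pc /= pc.sum(); ps /= ps.sum()

kl = lambda p, q: float((p * np.log(p / q)).sum())
print("saliency (symmetrized KL):", round(kl(pc, ps) + kl(ps, pc), 3))
```

A location whose features are easy to discriminate from their surround scores high, which is the pop-out intuition in decision-theoretic form.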



Presenter:  John George
Presentation type:  Symposium
Presentation date/time:  7/26  14:05-14:30
 
Dynamic Functional Neuroimaging through Probabilistic Integration of Multiple Imaging Modalities
 
John George, Los Alamos National Laboratory
 
In spite of remarkable advances in neuroimaging technologies over the last two decades, no single method provides everything that we desire for basic research or best clinical practice. Structural Magnetic Resonance Imaging (MRI) provides exquisite images of the head and brain that provide a powerful anatomical framework for functional mapping. Functional MRI provides lower resolution functional images based on metabolic or hemodynamic responses to brain activation, but does not provide information on the timescales most relevant for neural function, and may not match activity maps based on electrophysiological criteria. Magneto- and electroencephalography (MEG and EEG) provide excellent measures of neural population dynamics, but neural electromagnetic source localization depends on model-based solutions of an ill-posed inverse problem. MRI can provide geometry and conductivity information to significantly improve biophysical models of the head volume conductor, required for source localization. Bayesian inference techniques for MEG and EEG analysis provide a probabilistic approach for source localization and time course estimation, explicitly treating the ambiguity and uncertainty inherent in the inverse problem. Such methods allow formally rigorous strategies for integrating dynamic measures with spatial estimates of neural sources provided by fMRI or probabilistic functional atlas data, constrained by individual anatomy. Although these integrated methods fall short of true tomographic imaging they provide the best available techniques for noninvasive imaging of dynamic neural function. However, cutting edge methods may allow direct imaging of neural currents by MRI, eventually providing the ultimate tools for functional imaging of the human brain.



Presenter:  Richard Golden
Presentation type:  Talk
Presentation date/time:  7/28  10:30-10:55
 
Theorems Supporting Statistical Inference for Possibly Misspecified Models in the Presence of Missing Data
 
Richard Golden, University of Texas at Dallas
Steven Henley, Martingale Research Corporation
Halbert White, University of California, San Diego
Michael Kashner, University of Texas Southwestern Medical Center
Robert Katz, Martingale Research Corporation
 
A unified asymptotic statistical theory is developed for making reliable statistical inferences for a large class of possibly misspecified regression models (e.g., linear, nonlinear, and categorical regression) with ignorable missing data mechanisms. The theory also handles misspecification of the missing data mechanism so that asymptotic inferences are reliable for nonignorable response data (i.e., Missing Not At Random or MNAR) statistical environments. In this talk, we present the key theorems of this new asymptotic theory for the special case of discrete random variables. Specifically, these new theorems establish asymptotic parameter estimate consistency and normality in the presence of model misspecification within MNAR statistical environments. In addition, explicit regularity conditions for the Orchard and Woodbury (1972; also see Louis, 1982) Missing Information Principle are provided which are directly relevant to possibly misspecified models within MNAR environments. The theory is directly applicable to a wide variety of modeling situations including problems dealing with verification and "work up" bias, parameter estimation and inference in Hidden Markov Models, random effects regression models, Item Response Theory, and missing data in surveys where question-content influences patterns of missingness.



Presenter:  Noah Goodman
Presentation type:  Talk
Presentation date/time:  7/26  15:35-16:00
 
A Rational Analysis of Rule-based Concept Learning
 
Noah Goodman, MIT
Joshua Tenenbaum, MIT
Jacob Feldman, Rutgers University
Thomas Griffiths, University of California, Berkeley
 
We propose a new model of human concept learning, the Rational Rules model, that provides a rational analysis for learning of rule-based concepts. This model is built upon Bayesian inference for a grammatically structured hypothesis space---a "concept language" of logical rules. We compare predictions of the model to human generalization judgments in several well-known category learning experiments, and find good agreement for both average and individual participants' generalizations. Several important concept learning effects emerge naturally in this framework. Prototype and typicality effects arise from uncertainty over the inferred definition of a concept; selective attention comes from uncertainty over production parameters of the probabilistic concept grammar. We conclude by discussing a natural extension of the model to relational features, and we describe learning of role-governed concepts---concepts defined by their role in a relational system.
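
A toy version of the ingredients (ours, far simpler than the concept grammar in the paper): hypotheses are conjunctions over binary features, the prior favors shorter rules, and the likelihood tolerates exceptions with probability e; predictions average over rules by posterior weight:

```python
import itertools
import numpy as np

X = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [0, 0, 0]])   # example objects
y = np.array([1, 1, 0, 0])                                    # category labels
e = 0.1                                                       # exception prob

literals = [(f, v) for f in range(3) for v in (0, 1)]
hypotheses = [h for k in (1, 2) for h in itertools.combinations(literals, k)
              if len({f for f, _ in h}) == k]                 # no contradictions

def rule_pred(h, x):
    return int(all(x[f] == v for f, v in h))

weights = []
for h in hypotheses:
    prior = 2.0 ** (-len(h))                                  # shorter is better
    preds = np.array([rule_pred(h, x) for x in X])
    lik = np.prod(np.where(preds == y, 1 - e, e))
    weights.append(prior * lik)
weights = np.array(weights); weights /= weights.sum()

x_new = np.array([1, 1, 1])
p1 = sum(w * rule_pred(h, x_new) for w, h in zip(weights, hypotheses))
print("P(category 1 | x_new) =", round(float(p1), 3))
```

Graded, prototype-like generalization falls out of the averaging even though every individual hypothesis is a hard rule.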



Presenter:  Thomas Griffiths
Presentation type:  Symposium
Presentation date/time:  7/26  9:50-10:15
 
Monte Carlo and the Mind
 
Thomas Griffiths, University of California, Berkeley
 
(Note that this is part of the Modern Monte Carlo Methods invited symposium) Probability theory can be used in analyzing human cognition in two ways: as a hypothesis about how people represent degrees of belief and make inferences from data, or as part of statistical models that are used for data analysis. Both of these uses are relatively common, and methods for working with probability distributions are relevant to both. However, Monte Carlo methods are more commonly used in data analysis than as part of theories of cognition. I will talk about two ways that Monte Carlo methods connect to the mind: as methods for gathering information about subjective probability distributions, and as hypotheses about mechanisms that minds might use for simplifying the complex and computationally demanding task of performing probabilistic inference.



Presenter:  James Haxby
Presentation type:  Symposium
Presentation date/time:  7/26  14:45-15:10
 
Multivoxel pattern analysis: Methods for analysis of group data
 
James Haxby, Princeton University
Mert Sabuncu, MIT
Benjamin Singer, Princeton
Peter Ramadge, Princeton
 
Multivoxel pattern analysis (MVPA) detects distributed patterns of activity that distinguish among experimental conditions in fMRI experiments. MVPA has greater sensitivity than conventional analyses that are based on univariate statistical analysis of the time series for each voxel. Whereas conventional analyses ignore variations of response profiles across voxels within regions, MVPA detects information in these high spatial frequency features. MVPA is typically performed separately for each individual subject because methods for normalizing neuroanatomy are insufficient for aligning high spatial frequency functional topographies. Moreover, some topographies, such as that for orientation selectivity, have a dominant spatial frequency that is finer than the imaging matrix. MVPA detects these topographies as a highly aliased lower spatial frequency signal that cannot be aligned across subjects. I will present two methods for aligning data across subjects. In the first, functional topographies are aligned at a finer level of detail by using functional response as the basis for alignment. In our demonstration, we use the activity evoked by watching a movie for functional normalization. Because of the aliasing problem, however, purely mathematical methods are necessary for further alignment of individual functional brain spaces. In the second method, the multidimensional neural spaces defined by patterns of response to multiple conditions are analyzed as similarity structures for each individual subject. The similarity structures can then be analyzed to test a number of questions, such as 1. Variation by anatomical region, 2. Variation due to experimental manipulations such as training or attention, and 3. Variation due to group differences.



Presenter:  Yll Haxhimusa
Presentation type:  Symposium
Presentation date/time:  7/27  14:45-15:10
 
Solving the Euclidean TSP in the Presence of Input Errors
 
Yll Haxhimusa, Purdue University
Emil Stefanov, Purdue University
Zygmunt Pizlo, Purdue University
 
In real-life cases, like collecting balls scattered on a tennis court, humans solve the Euclidean Traveling Salesman Problem (E-TSP) represented not by the actual positions of the balls, but by the perceived ones. The percept of the problem is a result of visual reconstruction based on retinal images in the observer's eyes. Because the reconstruction is not likely to be perfect, the input is corrupted with errors. As a result, the optimal solution for the perceived problem may not be optimal for the original one. We tested a pyramid algorithm and the Concorde algorithm on orthographic images of E-TSP. The algorithms solved not the original problems but the projected ones, so the foreshortening of the image produced by the orthographic projection was the source of input errors. TSP problems with 6, 10, 20, 50 and 100 cities were used. The tour, represented by the order of cities, produced by each algorithm for the projected problem was used to determine the solution error for the original problem. With small problems, the solution error produced by the pyramid algorithm was, in most cases, not greater than that of Concorde. With larger problems, Concorde outperformed the pyramid algorithm in most cases, but the difference in performance was small. These results suggest that in the presence of input errors, approximating algorithms may be preferable because they produce solutions quickly and the solution error may be comparable to that of an algorithm that produces optimal solutions.
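
A sketch of the evaluation logic as we read it, using brute force so the small example stays exact (the projection angle and problem size are hypothetical):

```python
import itertools
import numpy as np

rng = np.random.default_rng(9)
n = 6
cities = rng.uniform(0, 1, (n, 2))
slant = np.deg2rad(60)
projected = cities * np.array([1.0, np.cos(slant)])  # orthographic foreshortening

def tour_len(order, pts):
    order = list(order) + [order[0]]
    return sum(np.linalg.norm(pts[a] - pts[b]) for a, b in zip(order, order[1:]))

def best_tour(pts):
    return min(itertools.permutations(range(n)), key=lambda o: tour_len(o, pts))

optimal = best_tour(cities)              # optimum for the original problem
perceived = best_tour(projected)         # optimum for the distorted input
err = tour_len(perceived, cities) / tour_len(optimal, cities) - 1
print(f"solution error caused by input distortion: {100 * err:.2f}%")
```

The key point mirrored here is that the tour is found on the distorted input but scored on the original positions.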



Presenter:  Andrew Heathcote
Presentation type:  Talk
Presentation date/time:  7/28  10:55-11:20
 
State-Trace Analysis of the Face Inversion Effect
 
Andrew Heathcote, University of Newcastle, Australia
Scott Brown, University of Newcastle, Australia
John Dunn, University of Adelaide, Australia
 
We replicated Loftus et al. (2004, Experiment 1), comparing the differential effect on recognition memory accuracy of inversion for pictures of faces and houses, except that we used pictures of real faces rather than identikit faces and collected more data on each participant, allowing analysis of individual participant performance. We also ran a second between-subjects condition in which study and test orientation were the same, to examine the effects of encoding specificity. The first design produced a stronger overall face inversion effect (FIE, i.e., a greater inversion effect for faces than houses) than found by Loftus et al., whereas the second design produced a weaker FIE. Graphical state-trace analysis (Bamber, 1979) supported a two-dimensional structure for the first design and a one-dimensional structure for the second design. We developed Bayesian statistical procedures to estimate the probability of one- and two-dimensional models, applied them to our data, and checked their performance on simulated data.
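
The graphical logic of state-trace analysis reduces to a monotonicity check (accuracy values below are hypothetical, not the reported data):

```python
# Each condition gives a (house accuracy, face accuracy) point. If the points
# can be ordered so both coordinates increase together, one latent dimension
# suffices; a non-monotonic trace implies at least two dimensions.
conditions = {"upright-short": (0.70, 0.80), "upright-long": (0.80, 0.88),
              "inverted-short": (0.66, 0.60), "inverted-long": (0.76, 0.70)}

points = sorted(conditions.values())          # order by house accuracy
faces = [f for _, f in points]
monotone = all(a <= b for a, b in zip(faces, faces[1:]))
print("one dimension suffices" if monotone else "two dimensions needed")
```

The Bayesian procedures mentioned above replace this all-or-none check with posterior probabilities for the one- and two-dimensional orderings.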



Presenter:  Sebastien Helie
Presentation type:  Talk
Presentation date/time:  7/26  9:50-10:15
 
Modeling the role of implicit processes in problem solving using a connectionist model
 
Sebastien Helie, Rensselaer Polytechnic Institute
Ron Sun, Rensselaer Polytechnic Institute
 
Many theories of problem solving have assumed a role for implicit cognitive processes. For instance, implicit processes are thought to generate hypotheses that are explicitly tested until a problem is solved (Evans, 1984, 2006). Also, Wallas' (1926) stage decomposition of creative problem solving included an implicit stage called 'incubation'. As a result, implicit knowledge is thought to be responsible for many correct solutions when solving insight problems. In this presentation, we propose a two-level connectionist model composed of a regular two-layer network and a Hopfield-type network. The former is linear and represents information locally to model associations in explicit memory. In contrast, the Hopfield-type neural network is non-linear and uses randomly generated distributed representations to model implicit knowledge. This representational difference is believed to reflect the difference in accessibility of explicit and implicit knowledge. The networks are connected to form a bidirectional associative memory. The stimuli are processed in both networks simultaneously until convergence and their outputs are integrated using a Bayesian function. Insight is modeled by the crossing of a threshold by the integrated output's activation. If the integrated output's activation is not sufficient to produce insight, the output of the model is used as the input for another iteration of processing. This model was used to simulate the hypothesis generation process in insight problem solving and the effect of incubation in a lexical decision task.
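
A toy sketch of the integration step as we read it (all activation values and the threshold are hypothetical):

```python
import numpy as np

explicit = np.array([0.30, 0.40, 0.30])   # explicit network output
implicit = np.array([0.10, 0.75, 0.15])   # implicit network output
threshold = 0.6

# Bayesian-style integration: normalized product of the two outputs.
integrated = explicit * implicit
integrated /= integrated.sum()
print(integrated.round(2),
      "insight!" if integrated.max() > threshold else "feed back and iterate")
```

When no response clears the threshold, the integrated output is fed back as the next input, which is how the incubation dynamics arise.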



Presenter:  Pernille Hemmer
Presentation type:  Poster
Presentation date/time:  7/27  17:30-18:30
 
The Effect of Prior Knowledge on Memory for Events
 
Pernille Hemmer, UC Irvine
Mark Steyvers, UC Irvine
 
Size judgments are known to be influenced by existing knowledge. This provides a natural means for examining the interaction between prior knowledge and size judgments in recall memory tasks. We present results from a series of experiments that demonstrate the degree to which people use their prior knowledge to recall the size of studied objects. The results show that participants' remembered-size judgments regress towards the prior mean size of the object. We hypothesize that the reproduction of past stimuli can be decomposed into three components: prior knowledge, episodic trace, and noise in recall. Estimates for the relative contributions of these three factors are obtained through a Bayesian estimation procedure.
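
The standard Gaussian version of such a decomposition makes the regression effect explicit (our notation, a sketch rather than the paper's exact model):

```latex
% With prior knowledge s ~ N(mu_0, sigma_0^2) and a noisy episodic trace
% m | s ~ N(s, sigma_m^2), the recalled size is centered on the posterior mean
\hat{s} \;=\; \frac{\sigma_m^{2}}{\sigma_0^{2} + \sigma_m^{2}}\,\mu_0
        \;+\; \frac{\sigma_0^{2}}{\sigma_0^{2} + \sigma_m^{2}}\,m,
```

a precision-weighted average that regresses toward the prior mean exactly as the judgments do, with recall noise added around it.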



Presenter:  Marc Howard
Presentation type:  Talk
Presentation date/time:  7/27  14:45-15:10
 
Vector spaces in the brain: Multivariate neural responses as a step toward physical models of memory
 
Marc Howard, Syracuse University
 
Distributed memory models (DMMs) describe the process of encoding and retrieval of information as operations taking place on vectors in a high-dimensional space. A physical model of memory would not only provide a quantitatively acceptable model of behavior, but also an accurate model of the actual computations taking place in the brain that support this behavior. Although it is difficult to specify what DMMs predict for the activity of single neurons, existing technologies make it possible to measure multivariate responses from multiple neurons (tetrode or silicon arrays) or patches of cortex (fMRI or optical imaging). DMMs can make natural predictions about the relationship of these ensemble responses corresponding to different study and/or encoding events. A recent study (Manns, Howard, & Eichenbaum, SfN 2006) applied this strategy to examining predictions of the temporal context model in a judgment of recency task in behaving rats implanted with tetrode arrays. This work suggests that the hippocampus has access to a representation of spatio-temporal context during performance of this task. More broadly, it illustrates the ability of DMMs to contribute to our understanding of the neural substrates of memory, and the potential for ensemble recordings to constrain DMMs.
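
One concrete version of such a prediction, a toy drifting-context sketch of our own (not the cited study's model fit):

```python
import numpy as np

# Context is a drifting combination of its past self and each new item's
# vector, so similarity between the population states at two study
# positions falls off with lag -- a pattern ensemble recordings can test.
rng = np.random.default_rng(11)
d, n_items, rho = 200, 10, 0.85
t = np.zeros(d)
contexts = []
for _ in range(n_items):
    item = rng.standard_normal(d) / np.sqrt(d)
    t = rho * t + np.sqrt(1 - rho ** 2) * item    # drift update
    contexts.append(t / np.linalg.norm(t))

sims = [float(contexts[0] @ c) for c in contexts]
print(np.round(sims, 2))   # similarity to the first state decays with lag
```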



Presenter:  Yung-Fong Hsu
Presentation type:  Talk
Presentation date/time:  7/27  9:25-9:50
 
A media-theoretical semiorder model of persuasion with an application to panel data
 
Yung-Fong Hsu, National Taiwan University, Taiwan
Michel Regenwetter, University of Illinois at Urbana-Champaign
 
Stochastic media theory is a class of stochastic models of persuasion developed by Falmagne and his colleagues. These models assume that personal preferences are represented by rankings (e.g., (strict) weak orders, semiorders) that may change over time, under the influence of "tokens" of information in the environment. Empirical applications of some weak order implementations to the U.S. presidential election panel data have been discussed in Regenwetter, Falmagne, and Grofman (1999) and Hsu, Regenwetter, and Falmagne (2005). Recently, Hsu and Regenwetter (in press) also tried out a simple semiorder implementation on the same data sets. The election panel data were recorded using Feeling Thermometer ratings, which have a natural transformation into weak orders. No such transformation is available in the case of semiorders, because each respondent may have a personal threshold. To deal with this situation, we investigate a 'random threshold' probabilistic response mechanism for the semiorder model. This response mechanism, along with the semiorder model, is applied to the 1992, 1996, and 2000 U.S. presidential election panel data.
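
A toy rendering of the random threshold idea (our version, with made-up ratings):

```python
import random

random.seed(10)
ratings = {"A": 85, "B": 70, "C": 68}   # hypothetical thermometer ratings

def semiorder(ratings, eps):
    """a is strictly preferred to b only when its rating exceeds b's by
    more than the personal threshold eps; otherwise the pair is tied."""
    return {(a, b) for a in ratings for b in ratings
            if ratings[a] > ratings[b] + eps}

eps = random.uniform(0, 20)             # respondent-specific threshold
print("eps =", round(eps, 1), "->", sorted(semiorder(ratings, eps)))
```

Averaging over the threshold distribution yields the response probabilities the model needs.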



Presenter:  Xiangen Hu
Presentation type:  Talk
Presentation date/time:  7/27  10:55-11:20
 
Statistical Closure of MPT Models under Parameter Constraints
 
Xiangen Hu, The University of Memphis
William Batchelder, University of California, Irvine
 
The class of binary multinomial processing tree (BMPT) models is characterized by binary links at nonterminal nodes, each associated with a parameter. The parameters are functionally independent and each is free to vary in the open unit interval. Previous work has shown that this class is statistically closed under some types of parametric constraints. By statistically closed is meant that when a certain parametric constraint is imposed, the constrained model, while not a BMPT, is nevertheless statistically equivalent to a model which is a BMPT. The closure theorems studied involve both dimension reducing constraints (Hu & Batchelder, 1994, Psychometrika) and order constraints (Knapp & Batchelder, 2004, J. Math. Psych.). These results allow certain statistical hypotheses to be handled within a general MPT inference scheme based on the EM algorithm. This paper generalizes BMPT models to allow nonterminal nodes to have multiple links. Multi-link MPT models cover a number of applications in cognitive modeling, e.g., source monitoring, and they are typical of tree models in statistical genetics, e.g., the ABO blood group model. For multi-link MPT models, the parameters are functionally independent pdfs with spaces corresponding to simplexes of various dimensionalities. The paper provides statistical closure theorems for dimension reducing constraints as well as order constraints both within and between parameter vectors. The results presented in this paper constitute a theoretical foundation for hypothesis tests for multi-link MPT models.
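
For readers new to the model class, a minimal BMPT example (the standard one-high-threshold recognition model, our choice purely for illustration):

```latex
% With detection parameter D and guessing parameter g, each free in (0,1),
% the old-item category probabilities are polynomials in the parameters:
P(\text{hit}) \;=\; D + (1-D)\,g, \qquad
P(\text{miss}) \;=\; (1-D)\,(1-g).
```

Each binary link contributes a factor of a parameter or its complement; multi-link trees replace these binary factors with components of probability vectors on simplexes, which is what the new closure theorems address.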



Presenter:  David Huber
Presentation type:  Talk
Presentation date/time:  7/27  15:35-16:00
 
A Stochastic Judgment Model of Recall: Separating Measurement, Memory, and Correlation
 
Yoonhee Jang, University of California, San Diego
David Huber, University of California, San Diego
Tom Wallsten, University of Maryland, College Park
 
Theoretical accounts of episodic recall typically assume that recall is an accurate all-or-none process. However, recent results often suggest a very different picture in which recall is fallible and graded along different dimensions. In order to foster new theoretical accounts of episodic recall, it is necessary to collect supplemental judgments both prospectively (e.g., judgments of learning) and retrospectively (e.g., judgments of confidence or source). For these judgments, signal detection theory is inappropriate because the classes of items (recalled versus non-recalled) are determined by the responder rather than through some external manipulation. In order to relate these judgments to the underlying memory distributions, we developed a new detection model that consists of 1) a criterial detection process for the judgments; 2) a criterial detection process for recall; and 3) some relationship (correlation) between the distributions that support these two detection processes. Variability in the judgment criteria implies inconsistent scale use (measurement) and variability in the recall criteria implies inconsistent retrieval strategies (memory). In sum, these three sources of inconsistency may contribute to a relative lack of correspondence between judgments and recall. In a series of empirical and computational studies, we investigated the validity of this model and its implications for episodic recall.



Presenter:  Geoffrey Iverson
Presentation type:  Talk
Presentation date/time:  7/28  11:20-11:45
 
Test statistics, p-values and Bayes Factors
 
Geoffrey Iverson, University of California, Irvine
 
Most empirical research continues to be reported in terms of classical test statistics such as t or F, and their associated p-values. The evidentiary content of p-values (small values of p are supposed to be decisive for rejecting a null hypothesis) has been severely questioned by Bayesian theorists, and for good reason. It turns out, however, that an examination of the distribution of p-values under typical alternative hypotheses links p-values to Bayes factors. A typical ANOVA can thus be reported in terms of F values together with associated Bayes factors. Until our empirical colleagues commit to a fully Bayesian analysis of their data, they can in the interim quickly learn to compute Bayes factors to assist in the analysis and interpretation of their data.
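One well-known calibration in this spirit, shown here purely for illustration (it is not necessarily the linkage developed in the talk), is the Sellke, Bayarri, and Berger (2001) lower bound on the Bayes factor in favor of the null:

import math

def min_bayes_factor(p):
    # Sellke, Bayarri, & Berger (2001): BF(H0) >= -e * p * ln(p),
    # valid for p < 1/e. Smaller values mean stronger evidence against H0.
    if not 0 < p < 1 / math.e:
        raise ValueError("bound holds only for 0 < p < 1/e")
    return -math.e * p * math.log(p)

for p in (0.05, 0.01, 0.001):
    bf = min_bayes_factor(p)
    print(f"p = {p:<6} min BF(H0) = {bf:.4f} "
          f"(odds against H0 at most {1/bf:.1f}:1)")

For example, p = 0.05 yields a minimum Bayes factor of about 0.41, i.e., odds of at most roughly 2.5:1 against the null, far weaker evidence than the p-value is usually taken to convey.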



Presenter:  Yoonhee Jang
Presentation type:  Talk
Presentation date/time:  7/26  9:25-9:50
 
Testing the unequal-variance, dual-process, and mixture signal-detection models in yes/no and two-alternative forced-choice recognition
 
Yoonhee Jang, University of California, San Diego
John Wixted, University of California, San Diego
David Huber, University of California, San Diego
 
Three models have been advanced to explain the asymmetrical ROCs that are commonly observed on recognition memory tasks. One model, the unequal-variance signal-detection (UVSD) model, assumes that recognition decisions result from a strength-based process that is governed by two unequal-variance Gaussian distributions. A second model, the dual-process signal-detection (DPSD) model, assumes that recognition decisions are sometimes based on a threshold-recollection process and otherwise rely on a strength-based (familiarity) process. A third model, the mixture signal-detection (MSD) model, holds that recognition memory decisions are based on a continuous memory strength variable, but the old item distribution consists of a mixture of two equal-variance Gaussian distributions with different means: the higher-mean distribution for attended items and the lower-mean distribution for partially attended items. We tested the ability of these three models to predict two-alternative forced-choice (2AFC) recognition performance based on an ROC analysis of yes/no recognition performance. While all three were able to predict 2AFC performance to some degree, the UVSD model explained more variance than either the DPSD or the MSD model. In addition, the specific model-based parameter estimates were more sensible for the UVSD model than for the other two models. Issues of theoretical validity and model flexibility will be discussed.
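For reference, the UVSD prediction used in this kind of test is straightforward: if target strengths are N(mu, sigma^2) and lure strengths N(0, 1), the target-minus-lure difference is N(mu, 1 + sigma^2), so predicted 2AFC accuracy is Phi(mu / sqrt(1 + sigma^2)). A minimal sketch (parameter values are illustrative only, not estimates from this study):

from math import sqrt
from statistics import NormalDist

def uvsd_2afc(mu, sigma):
    # Predicted 2AFC accuracy from UVSD yes/no parameters: the
    # target-minus-lure strength difference is N(mu, 1 + sigma^2).
    return NormalDist().cdf(mu / sqrt(1.0 + sigma**2))

# Illustrative recognition-memory values: mu = 1.25, sigma = 1.25.
print(f"predicted 2AFC accuracy: {uvsd_2afc(1.25, 1.25):.3f}")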



Presenter:  Michael Jones
Presentation type:  Symposium
Presentation date/time:  7/28  9:25-9:50
 
Bridging semantic representation and associative memory theory
 
Michael Jones, Indiana University
 
Contemporary models of lexical representation have a major advantage over traditional models in that they are able to learn representations from statistical information in the environment rather than relying on hand-coded representations based on intuition. However, these methods are still fundamentally based on algorithms from document retrieval (e.g., Salton & McGill, 1983). In this talk, I will outline the BEAGLE model (Jones & Mewhort, 2007, Psyc Rev), an attempt to build high-dimensional semantic representations for words using mechanisms adapted from associative memory theory (cf. Murdock, 1982). The model represents contextual co-occurrence and word order information in a single holographic vector per word using superposition and convolution mechanisms that have proven effective at modeling human learning and memory in a variety of other domains. The additional word order information gives the model a higher fidelity lexical representation than co-occurrence alone, which is beneficial in several tasks. Further, the learning mechanism can be inverted to retrieve sequential dependency information from the semantic representations. The model will be trained on text corpora and the similarity of the resulting representations will be compared to human data in tasks involving semantic judgments, priming, comprehension, and the time course of semantic acquisition.
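A minimal sketch of the two BEAGLE mechanisms named above, superposition for context and circular convolution for order (simplified here: BEAGLE's directional permutations and n-gram chunks are omitted, and all vectors are made up):

import numpy as np

rng = np.random.default_rng(10)
dim = 1024

def env_vector():
    # Random environmental vector for a word, elements N(0, 1/dim).
    return rng.normal(0.0, 1.0 / np.sqrt(dim), dim)

def cconv(a, b):
    # Circular convolution, the holographic binding operation, via FFT.
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

# Encode "dog" from the sentence "the dog barks".
the, barks, phi = env_vector(), env_vector(), env_vector()
context = the + barks                          # superposition of neighbors
order = cconv(the, phi) + cconv(phi, barks)    # bind around placeholder phi
memory_dog = context + order                   # one holographic vector

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Words sharing contexts acquire similar memory vectors.
memory_catlike = the + barks + env_vector()
print(f"cosine(dog, cat-like word) = {cosine(memory_dog, memory_catlike):.3f}")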



Presenter:  Michael Kahana
Presentation type:  Plenary Speech
Presentation date/time:  7/26  16:30-5:30
 
Associative Processes in Episodic Memory
 
Michael Kahana, University of Pennsylvania
 
Association and context constitute two of the central ideas in the history of memory research. Following a brief discussion of the history of these ideas, I will review data that demonstrate the complementary roles of temporal contiguity and semantic relatedness in determining the order in which subjects recall items and the timing of their successive recalls. These analyses reveal that temporal contiguity effects persist over very long time scales, a result that challenges traditional psychological and neuroscientific models of association. The form of the temporal contiguity effect is conserved across all of the major recall tasks and even appears in item recognition when subjects respond with high confidence. The near-universal form of the contiguity effects and its appearance at diverse time scales is shown to place tight constraints on major theories of association. Howard & Kahana's (2002) temporal context model accounts for these phenomena by combining a mathematical model of contextual coding with a contextual retrieval mechanism. I will present a recent extension of TCM that uses a set of competitive leaky accumulators to explain the temporal dynamics of item retrieval, and I will discuss how this theory might be further tested using neuroscientific methods.



Presenter:  Woojae Kim
Presentation type:  Poster
Presentation date/time:  7/26  17:30-6:30
 
Model Selection with Data under Individual Differences
 
Woojae Kim, Indiana University
Richard Shiffrin, Indiana University
 
Hierarchical modeling has been demonstrated as a good way of modeling data with individual differences (Rouder and Lu, 2004; Navarro et al., 2006). In a situation where the size of data available from each subject is small and individual differences clearly exist, hierarchical modeling provides far more accurate model estimation than modeling either individual or averaged (or aggregate) data. Can we expect the same kind of benefit from hierarchical modeling for the model selection problem? That is, does a model selection judgment made with hierarchical models represent a better decision than that with models of individual or aggregate data in a situation like the above? The present study investigates this question. By taking models from different modeling areas and employing simulation approaches, this study evaluates the decision performance of model selection with hierarchical models, in comparison to model selection with models of individual and aggregate data. Predictive accuracy, which is operationalized by the discrepancy of the selected model from the true, generating model, is used as a criterion for the decision performance. The simulation design includes the variation of sample size within a subject and the different degrees of individual differences. The results demonstrate that hierarchical modeling provides better decision making for model selection.



Presenter:  Woojae Kim
Presentation type:  Talk
Presentation date/time:  7/28  9:00-9:25
 
Understanding the Connectionist Modeling of Quasiregular Mappings in Reading Aloud
 
Woojae Kim, Indiana University
Mark Pitt, Ohio State University
Jay Myung, Ohio State University
 
The connectionist approach to reading aloud has been a serious challenge to the traditional dual-route theory, but a critical question concerning the theoretical distinction between the connectionist approach and the dual-route theory remains unresolved: through what kind of internal structure does a single-route connectionist model represent the two seemingly distinct abilities to process regularities and exceptions without relying on a dual-route structure? By taking a model from Plaut et al. (1996) and examining it closely, the present study attempts to answer this question. Various forms of network analysis demonstrate that the representational system in hidden-unit space is structured in the same way regardless of whether regularities or exceptions are being learned. Further analyses of the effect of the reading network's exception learning on its nonword reading suggest a proper viewpoint on the connectionist mechanism for a quasiregular task. Contrary to the dual-route assumption, exception learning in connectionist models of reading aloud does affect the model's nonword reading performance. This is analogous to the way "noise capturing" or "overfitting" in statistical modeling affects a model's generalization performance. In reality, however, the severity of "ordinary exceptions" in normal word reading happens not to be high enough to ruin the network's nonword reading, as "noise" does in statistical modeling.



Presenter:  Krystal Klein
Presentation type:  Talk
Presentation date/time:  7/27  9:50-10:15
 
Cross-Situational Statistical Word Learning Tasks: Modeling Overt Responses and Eye Movement Data
 
Krystal Klein, Indiana University
Chen Yu, Indiana University
Richard Shiffrin, Indiana University
 
Recent studies (e.g., Yu & Smith, 2007) show that both adults and young children can utilize cross-situational statistical information to build word-to-world mappings and solve the reference uncertainty problem in language learning. Our recent simulation work implemented several computational models to take into account constraints of human learners, such as attentional limitations, embodied selection of visual information, and forgetting of stored information over time; all of these models can fit behavioral data well. To distinguish between those models, this work reports results from a new series of experiments in which adults are exposed to a rapid series of learning trials, wherein any given training trial contains uncertainty in sound-to-picture mappings, but in which this uncertainty is resolved across multiple trials. Several models of learning strategy are proposed and tested using variations of the original task (Yu and Smith, 2007) that provide more constraining data than merely the percentage of words learned, including (1) rankings of options at testing, (2) effects of acquired knowledge on subsequent learning, and (3) effects of frequency variations and violations of the mutual exclusivity constraint. More importantly, we measure learners' moment-by-moment eye movements to (1) correlate eye movement patterns with learning results, (2) measure dynamic changes in eye movements from early trials to later trials, (3) encode this information in our models to infer underlying learning mechanisms based on the synchrony between eye movements and speech perception, and (4) evaluate which model is more cognitively plausible.



Presenter:  Janne V. Kujala
Presentation type:  Symposium
Presentation date/time:  7/26  10:30-10:55
 
Sequential Monte Carlo for Bayesian Adaptive Estimation
 
Janne V. Kujala, University of Jyväskylä
 
Until recently, adaptive psychophysical estimation methods have been based on models with only a few unknown variables and with the stimuli varying over one dimension only. One reason for not using more complicated models is that the straightforward grid sampling of the parameter space employed by most Bayesian estimation methods grows exponentially with the number of parameters. In some Bayesian estimation contexts, such as radar tracking, sequential Monte Carlo (SMC) algorithms have been able to overcome this curse of dimensionality. However, in psychometric measurement the underlying model is decidedly different: the unknown parameters do not change in time, which leads to sample degeneration in most SMC algorithms. To avoid this degeneracy in static models, Chopin (2002) combines an SMC algorithm with Markov chain Monte Carlo, and Kujala and Lukka (JMP, 2006) propose a similar algorithm for Bayesian adaptive estimation. The price to pay is quadratic scaling with the number of observations, as opposed to the linear scaling of typical SMC algorithms or of straightforward grid sampling. However, in psychometric measurement the number of trials is relatively small, so this is generally a small price. The algorithm is applied to the estimation of discrimination threshold contours around target colors in a color plane. With the four varied dimensions of this model, the SMC algorithm is on a par with a highly optimized version of the deterministic grid sampling algorithm; with more dimensions, grid sampling approaches its limits, while the Monte Carlo approach is expected to scale much more gracefully.
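The flavor of the resample-move scheme can be conveyed with a one-parameter toy problem (everything below is a hypothetical stand-in for the actual color-discrimination model): incremental weight updates per trial, with an MCMC rejuvenation step over all past data whenever the effective sample size collapses, which is where the quadratic cost arises.

import numpy as np

rng = np.random.default_rng(3)

def p_correct(x, theta):
    # Toy 1D psychometric function; stands in for the color model.
    return 1.0 / (1.0 + np.exp(-(x - theta)))

def loglik(theta, xs, ys):
    p = p_correct(np.asarray(xs)[:, None], np.asarray(theta)[None, :])
    y = np.asarray(ys)[:, None]
    return np.sum(np.log(np.where(y == 1, p, 1 - p)), axis=0)

theta_true = 1.5
particles = rng.normal(0.0, 3.0, 2000)     # draws from the N(0, 9) prior
logw = np.zeros_like(particles)
xs, ys = [], []

for trial in range(200):
    x = rng.uniform(-3.0, 5.0)             # stimulus (placement rule omitted)
    y = int(rng.random() < p_correct(x, theta_true))
    xs.append(x); ys.append(y)

    p = p_correct(x, particles)            # O(1) weight update per particle
    logw += np.log(p if y else 1 - p)

    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w**2) < len(particles) / 2:   # effective sample size low
        idx = rng.choice(len(particles), len(particles), p=w)
        particles, logw = particles[idx], np.zeros_like(logw)
        # Metropolis move targeting the full posterior (Chopin, 2002);
        # it touches all past data, hence the quadratic overall scaling.
        prop = particles + rng.normal(0.0, 0.3, len(particles))
        log_acc = (loglik(prop, xs, ys) - loglik(particles, xs, ys)
                   - (prop**2 - particles**2) / (2 * 3.0**2))
        accept = np.log(rng.random(len(particles))) < log_acc
        particles = np.where(accept, prop, particles)

w = np.exp(logw - logw.max()); w /= w.sum()
print(f"posterior mean of theta: {np.sum(w * particles):.3f}")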



Presenter:  Janne V. Kujala
Presentation type:  Talk
Presentation date/time:  7/27  10:55-11:20
 
A principle for child-friendly adaptation in learning games
 
Janne V. Kujala, University of Jyväskylä
Heikki Lyytinen, University of Jyväskylä
 
Learning games typically employ adaptation rules such as increasing the difficulty of the learning tasks after correct answers and decreasing it after incorrect answers, so as to maintain a certain prescribed success rate. However, even though a success rate within certain bounds may be necessary, these ad hoc adaptation rules are in fact not very efficient for reaching it, and no success rate by itself can guarantee good learning results. Thus, a more general principle for adaptation is called for. In this work, we approach the problem from the mathematically solid foundation of Bayesian adaptive estimation. Our key hypothesis is that the contents of learning tasks that yield the most *new* information about the skills of the child, while being desirable for measurement in their own right, would also be among those that are effective for learning. Indeed, optimizing informativity appears to naturally avoid tasks that are exceedingly difficult or exceedingly easy, as the model can predict the results of such tasks to be incorrect or correct, respectively, and so the actual answers would yield little new information. However, as failures can easily lower motivation, we propose the more child-friendly objective of optimizing the expected information gain divided by the expected failure rate, i.e., the cost of the information is measured as the number of failures.
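The proposed objective is easy to state computationally: over candidate tasks, choose the one maximizing the mutual information between the (binary) outcome and the skill parameter, divided by the expected failure probability. A minimal sketch with a hypothetical one-parameter skill model (not the authors' actual model):

import numpy as np

rng = np.random.default_rng(4)

def p_success(difficulty, theta):
    # Hypothetical skill model: harder tasks succeed less often.
    return 1.0 / (1.0 + np.exp(-(theta - difficulty)))

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Current belief about the child's skill, as equally weighted particles.
theta = rng.normal(0.0, 1.0, 5000)
w = np.full(theta.shape, 1.0 / len(theta))

best = None
for x in np.linspace(-3, 3, 61):                # candidate difficulties
    p = p_success(x, theta)
    p_bar = np.sum(w * p)                       # predictive success prob.
    # Expected information gain = mutual information I(outcome; theta).
    info = binary_entropy(p_bar) - np.sum(w * binary_entropy(p))
    utility = info / (1.0 - p_bar)              # bits per expected failure
    if best is None or utility > best[0]:
        best = (utility, x, p_bar)

u, x, p_bar = best
print(f"chosen difficulty {x:+.2f}: P(success) = {p_bar:.2f}, "
      f"{u:.2f} bits per failure")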



Presenter:  Oh-Sang Kwon
Presentation type:  Talk
Presentation date/time:  7/26  9:25-9:50
 
Pyramid model of the transfer of skilled movement
 
Oh-Sang Kwon, Purdue University
Zygmunt Pizlo, Purdue University
Howard Zelaznik, Purdue University
George Chiu, Purdue University
 
Generalized Motor Program theory (Schmidt, 1975) has been the most influential theory of the transfer of motor learning. The theory suggests that a motor program is represented by a temporal structure, 'relative timing', defined as the ratio of submovement duration to total movement duration. According to the relative timing model, a motor program can transfer easily to similar movements if the relative timing is preserved. This model has problems: the invariance of relative timing is often violated in empirical data, and, more importantly, the relative timing model says nothing about the spatial aspects of motor programs. In this presentation, we propose a pyramid model, well established in visual perception, to explain the spatial structure of motor programs. Due to the self-similar structure of the pyramid model, a motor program should transfer easily when the ratio between the size (D) and accuracy (A) of movements is preserved. The role of the ratio (D/A) in transfer was tested in eight conditions. The movement and the target size were changed by a factor of two; the eight conditions represented all cases of target and movement size changes. Results supported the pyramid model, showing that transfer was most efficient when the ratio between the size and the accuracy of movement was preserved. Further analysis of the temporal structure of the data showed that relative timing could not explain the results.



Presenter:  John LaMuth
Presentation type:  Poster
Presentation date/time:  7/27  17:30-18:30
 
A Psycho-Linguistic Model for Behavioral Ethics
 
John LaMuth, JLM Mediation S.
 
A new model of ethical behavior, described as a ten-level meta-hierarchy of the major groupings of virtues, values, and ideals, serves as the foundation for a mathematical model of ethics and morality. This innovation arises as a direct outcome of the Systems Theory notion of the metaperspective (a higher-order perspective upon a viewpoint held by another). The traditional groupings of virtues and values are collectively arrayed as subsets within a hierarchy of metaperspectives, each more abstract listing building directly upon that which it supersedes: for example, the cardinal virtues (prudence-justice-temperance-fortitude), the theological virtues (faith-hope-charity-decency), and the classical Greek values (beauty-truth-goodness-wisdom). Each of these groupings is split into a complex of four subordinate terms, allowing for precise, point-for-point stacking within the hierarchy of metaperspectives. When additional groupings of ethical terms (namely, the personal ideals (glory-honor-dignity-integrity), the civil liberties (providence-liberty-civility-austerity), the humanistic values (peace-love-tranquility-equality), and the mystical values (ecstasy-bliss-joy-harmony), amongst others) are added to the mix, the complete ten-level hierarchy of metaperspectives emerges in full detail. This cohesive hierarchy of virtues and values proves particularly comprehensive in scope, accounting for the major terms celebrated within the Western tradition: mirroring the specialization of personal, group, spiritual, humanitarian, and transcendental realms within human society as a whole, modeled in terms of US Patent 6587846 as a mathematical simulation of ethical artificial intelligence. www.ethicalvalues.info



Presenter:  Thomas Landauer
Presentation type:  Symposium
Presentation date/time:  7/28  9:00-9:25
 
Fundamental Issues in Modeling Language
 
Thomas Landauer, Pearson Knowledge Technologies
 
Current models do a good job of learning to estimate the similarity of word and passage meanings from exposure to the same inputs from which humans learn, and can thereby simulate comprehension quite well. No model can yet learn from natural input (with no help from human descriptive rules or annotations) how to use syntax to improve comprehension or to compose text that syntactically and semantically expresses new "ideas" similarly well.



Presenter:  Michael Lee
Presentation type:  Symposium
Presentation date/time:  7/27  9:50-10:15
 
A Hierarchical Bayesian Account of Human Decision-Making Using Wiener Diffusion
 
Michael Lee, University of California, Irvine
Joachim Vandekerckhove, K.U. Leuven
Daniel J. Navarro, University of Adelaide
Francis Tuerlinckx, K.U. Leuven
 
We present a fully Bayesian approach to using Wiener diffusion as an account of the time-course of two-alternative decision-making. Using graphical modeling, and MCMC methods to draw posterior samples, we reconsider the seminal data of Ratcliff and Rouder (1998), who tested three observers in a brightness discrimination task under both speed and accuracy conditions. Our model employs hierarchical Bayesian methods to model the psychophysical relationship between stimulus properties and drift rates, and relies on latent assignment methods to infer contaminants in the data. We find evidence, consistent with the original analysis, that task instructions affect boundary separation, and that the model accounts for decisions and response time distributions well. But we also observe a number of results that are inconsistent with the original analysis, relating to the psychophysical function, and the nature of the theoretically important cross-over effect. We also show that our Bayesian approach has the potential to estimate model parameters accurately using a small fraction of the original data set.



Presenter:  Yunfeng Li
Presentation type:  Talk
Presentation date/time:  7/27  14:05-14:30
 
A Bayesian model of 3D shape reconstruction involving symmetry, planarity and compactness constraints
 
Yunfeng Li, Purdue University
Zygmunt Pizlo, Purdue University
 
It is known that an orthographic image of a 3D symmetrical shape determines a one-parameter family of 3D symmetrical interpretations. Our previous study indicated that in most cases subjects' percepts correspond to the 3D shape in this family whose compactness is maximal (Maximum Compactness Model, M2C; Li & Pizlo, 2007). In some cases, however, the perceived shape is not maximally compact: its range in depth is smaller than that of the maximally compact shape. To account for this result, we formulated a Bayesian version of M2C (BM2C). Two psychophysical experiments were performed to test these models. In Experiment 1, orthographic images of random 3D symmetrical shapes with random 3D orientations were generated. For those shapes, the reconstructions of the two models are usually similar, and they closely match the original shapes. Subjects were asked to adjust the unique parameter that characterizes the family of 3D interpretations until the reconstructed 3D shape matched their percept. The perceived shapes closely matched the original shapes. In Experiment 2, the 3D shapes and their orientations were not random; instead, they were selected to produce varying degrees of difference between the reconstructions of the two models. The subjects' task was the same as in Experiment 1. The perceived shapes were much closer to the 3D shapes reconstructed by BM2C than to those reconstructed by M2C.



Presenter:  Anli Lin
Presentation type:  Poster
Presentation date/time:  7/27  17:30-18:30
 
Equating Designs Comparison: Matched-Sample versus Common-Person Design
 
Anli Lin, Harcourt Assessment, Inc
Don Meagher, Harcourt Assessment, Inc
Christina Stellato, Harcourt Assessment, Inc
 
The purpose of this study was to develop an effective way to obtain equivalent groups for equating purposes using a matched-sample method, and to compare equating results obtained using matched samples with those obtained using a common-person design. The results of this study show that an equating parameter derived from the matched-sample method is very close to one derived from a common-person design: the difference between the two designs is only about 0.1 scaled-score unit (mean 400; standard deviation 25). In the paper, we consider the theoretical assumptions that support using the matched-sample method to determine equivalent groups and discuss the practical operations involved in using this method. Our findings suggest two major advantages of the matched-sample method over a common-person design. First, the data available to the matched-sample method come from a large sample, which makes the results more accurate and reliable than would be the case with the smaller samples typical of common-person designs. Second, because the matched-sample method can use existing data, no extra design and operational effort is needed to collect common-person data, which saves the time and money involved in conducting a research study in the field. For these reasons, we recommend using the matched-sample method whenever existing data are available, especially for renorming an existing test.



Presenter:  Anli Lin
Presentation type:  Talk
Presentation date/time:  7/28  9:25-9:50
 
Standard Error Analysis with Bootstrap and IRT Model
 
Anli Lin, Harcourt Assessment, Inc
Don Meagher, Harcourt Assessment, Inc
Christina Stellato, Harcourt Assessment, Inc
 
The purposes of this study are: a) to find the relationship between the standard error of scaled scores and sample size; b) to compare the standard error of scaled scores obtained by sampling the data before calibration with that obtained by sampling the data after calibration; and c) to compare the standard error of rounded scaled scores with that of unrounded scaled scores. Standard errors were calculated with bootstrapping, and the Rasch model was used for calibration with a SAS program. Major results: a) when sample sizes are below 500, the standard error drops quickly as sample size increases; between 500 and 1000, the standard error changes slowly; above 1000, the standard error is relatively stable. b) Sample size can be decided based on the tolerance for standard error. c) Confidence intervals were calculated for various sample sizes as a reference for sample-size decisions. d) Sampling with IRT requires more samples than sampling without IRT to achieve the same standard error. e) When sample size is small, the standard errors of rounded and unrounded scaled scores show similar patterns; when sample size is large, their patterns differ significantly.
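The basic bootstrap computation is simple; a minimal Python sketch follows (the actual study used a Rasch calibration in SAS, so the scoring function below is a stand-in, and the data are simulated):

import numpy as np

rng = np.random.default_rng(5)

def scaled_score(prop_correct):
    # Stand-in scoring function mapping mean proportion correct onto a
    # scale with mean 400, sd 25 (the study used a Rasch calibration).
    return 400.0 + 25.0 * (prop_correct.mean() - 0.6) / 0.15

def bootstrap_se(scores, B=2000):
    n = len(scores)
    boot = [scaled_score(scores[rng.integers(0, n, n)]) for _ in range(B)]
    return np.std(boot, ddof=1)

for n in (100, 250, 500, 1000, 5000):
    scores = rng.normal(0.6, 0.15, n)      # simulated examinee scores
    print(f"n = {n:>5}: bootstrap SE = {bootstrap_se(scores):.2f}")

Consistent with the reported trend, the standard error falls quickly up to roughly n = 500 and flattens beyond n = 1000, as expected from its 1/sqrt(n) behavior.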



Presenter:  Zhong-Lin Lu
Presentation type:  Symposium
Presentation date/time:  7/26  13:15-13:40
 
Introduction to MRI and fMRI
 
Zhong-Lin Lu, University of Southern California
 
Functional Magnetic Resonance Imaging (fMRI) has become an important tool in studying the neural basis of human behavior. In this talk, I will briefly introduce the physical principles underlying Magnetic Resonance Imaging (MRI), the physiological basis of functional Magnetic Resonance Imaging (fMRI), and the basic experimental paradigms and data analysis techniques in fMRI. I will also highlight the challenges and the need for developing novel data analysis and modeling techniques in brain imaging research.



Presenter:  William Messner
Presentation type:  Poster
Presentation date/time:  7/26  17:30-18:30
 
Evaluating the Priority Heuristic: A Comparison of Decision Making Models
 
William Messner, University of Illinois at Urbana-Champaign
Michel Regenwetter, University of Illinois at Urbana-Champaign
Clintin Davis-Stober, University of Illinois at Urbana-Champaign
 
In their 2006 Psychological Review paper, Brandstätter, Gigerenzer, and Hertwig propose the Priority Heuristic, a model of decision making over paired prospects. They compare the Priority Heuristic to several models of decision making (e.g., Cumulative Prospect Theory). We discuss several problems with their method and propose an alternative framework for comparing decision models.



Presenter:  Jonathan Miller
Presentation type:  Poster
Presentation date/time:  7/27  17:30-18:30
 
Clustering by spatial proximity during memory search
 
Jonathan Miller, University of Pennsylvania
Sean Polyn, University of Pennsylvania
Michael Kahana, University of Pennsylvania
 
A major principle of episodic memory is that stimuli are associated with the context in which they are experienced. Many variables may make up this context, including internal states such as mood and external states such as the surrounding environment. By manipulating elements of the context in which memories are stored, we can gain insight into the mechanisms by which stimuli are organized in memory. While a large literature has examined the effect of large changes in environmental context on memory performance in free recall (see Smith, 1988, for a review), less attention has been paid to the influence of spatial proximity in this paradigm. In the current experiment, subjects deliver items to a series of locations in a virtual town, playing the role of a delivery person. At the end of each list the subject is asked to recall the set of delivered objects. We used a permutation procedure to assess the degree to which successive recalls came from nearby spatial locations. Subjects did indeed tend to cluster the recall of items by spatial proximity. These results are consistent with a model of the memory system in which items are associated with a spatial representation that is similar for nearby points in the environment, which can then serve as a cue influencing memory retrieval.
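The permutation procedure can be sketched in a few lines (the locations and recall order below are made up): compare the observed mean distance between successively recalled items' delivery locations to its distribution under shuffled recall orders.

import numpy as np

rng = np.random.default_rng(6)

# Hypothetical delivery locations (x, y) of the studied items and the
# item indices of the subject's successive recalls.
locations = rng.uniform(0, 100, size=(15, 2))
recall_order = np.array([3, 4, 9, 8, 14, 2, 1])

def mean_transition_distance(order, locs):
    pts = locs[order]
    return np.mean(np.linalg.norm(np.diff(pts, axis=0), axis=1))

observed = mean_transition_distance(recall_order, locations)

# Permutation test: shuffle the order of the recalled items.
perms = np.array([
    mean_transition_distance(rng.permutation(recall_order), locations)
    for _ in range(10_000)])
p_value = np.mean(perms <= observed)   # small p => spatial clustering
print(f"observed {observed:.1f}, p(clustered) = {p_value:.3f}")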



Presenter:  Maximiliano Montenegro
Presentation type:  Talk
Presentation date/time:  7/27  10:30-10:55
 
Generalized Context Model as a locally neighborhood-based classifier
 
Maximiliano Montenegro, Ohio State University
 
In this work, we present a novel interpretation of the GCM (Generalized Context Model; Nosofsky, 1986) as a locally neighborhood-based classifier, in which the classification of a stimulus into a class depends on the region of psychological space to which it belongs. Specifically, it is shown that the GCM approximates the domain of a category in psychological space as the union of a finite set of smaller regions surrounding the exemplars, whose shapes depend on the parameters chosen.
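For reference, the GCM choice rule being reinterpreted here (a minimal sketch; parameter values and stimuli are illustrative):

import numpy as np

def gcm_prob(x, exemplars, labels, c=2.0, r=1.0, gamma=1.0):
    # GCM (Nosofsky, 1986): similarity to each stored exemplar decays
    # exponentially with Minkowski-r distance in psychological space;
    # P(category k) is summed similarity to k's exemplars over the total.
    # gamma is the response-scaling exponent used in extended versions.
    d = np.sum(np.abs(exemplars - x)**r, axis=1)**(1.0 / r)
    s = np.exp(-c * d)
    sums = np.array([s[labels == k].sum() for k in np.unique(labels)])
    return sums**gamma / np.sum(sums**gamma)

# Two categories of exemplars in a 2D psychological space.
ex = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 0.8]])
lab = np.array([0, 0, 1, 1])
for probe in ([0.1, 0.0], [0.5, 0.5], [0.95, 0.9]):
    print(probe, gcm_prob(np.array(probe), ex, lab).round(3))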



Presenter:  Bennet Murdock
Presentation type:  Talk
Presentation date/time:  7/27  15:35-16:00
 
Update on the TODAM Working Memory model for serial-order effects
 
Bennet Murdock, University of Toronto
 
The TODAM Working Memory model is a process model that deals with the encoding, storage and retrieval of item and serial-order information in short-term memory. I have been able to improve on several weaknesses of the previous version, so my paper will be a review and progress report on the current state of the model.



Presenter:  Ryan O. Murphy
Presentation type:  Symposium
Presentation date/time:  7/27  15:10-15:35
 
An Admissions Problem: Selecting a portfolio of risky binary options
 
Ryan O. Murphy, Columbia University
J. Neil Bearden, INSEAD
 
We study a problem where a decision maker (DM) is simultaneously presented with a set of risky binary options that concurrently increase in value but decrease in their probability of being realized. Further, the DM operates under the restriction of being able to accept only the most valuable option that is realized. In a single stage, the DM is called upon to select a set of risky options, where each selection is costly. Such decision scenarios have natural analogues in the complex world; take, for example, a student applying for admission to graduate school. Here the options are differently valued (some schools are preferred to others) and differently likely (some schools are harder to get into than others). Additionally, these features are negatively correlated, and the student can attend at most one school. We refer to this decision context as an Admissions Problem. We present data from a laboratory experiment in which financially motivated DMs were presented with a variety of Admissions Problems with varying probability structures, and we contrast these results with the normative solution. Results indicate that DMs generally under-apply (do not select enough risky options from a set). This bias persists even with experience and feedback. Generally, DMs did not select risky options with middle-high ranks when they should have; conversely, they selected lower-ranked options when they should not have. These results are consistent with well-known decision biases, including risk and loss aversion.
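The normative benchmark is easy to compute exactly for small option sets: order a portfolio by value, credit each option with the probability that it is realized while every more valuable choice is missed, subtract application costs, and search over subsets. A minimal sketch with hypothetical values and costs:

from itertools import combinations

def expected_value(portfolio, cost):
    # Only the most valuable realized option pays off; each application
    # costs `cost`.
    ev = -cost * len(portfolio)
    miss_better = 1.0
    for v, p in sorted(portfolio, reverse=True):   # best option first
        ev += v * p * miss_better
        miss_better *= (1 - p)                     # all better ones missed
    return ev

# Value and admission probability are negatively correlated.
options = [(100, 0.10), (80, 0.25), (60, 0.50), (40, 0.75), (20, 0.95)]
best = max((expected_value(s, cost=5.0), s)
           for k in range(len(options) + 1)
           for s in combinations(options, k))
print("optimal portfolio:", best[1], "EV =", round(best[0], 2))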



Presenter:  Jay Myung
Presentation type:  Symposium
Presentation date/time:  7/26  10:55-11:20
 
Optimizing Experimental Designs for Model Discrimination
 
Jay Myung, Ohio State University
Maximiliano Montenegro, Ohio State University
Mark Pitt, Ohio State University
 
In this study we explore statistical methods for optimizing an experimental design to distinguish between competing models. Information about model performance and the experimental design are integrated to identify variable settings that will maximally discriminate the models. The problem of design optimization is challenging because of the many, sometimes arbitrary, choices that must be made when designing an experiment. Nevertheless, it is generally possible to find a design that is optimal in a defined sense. For example, in designing an experiment that investigates retention, the experimenter must choose the number of time intervals between the study and test sessions and the actual time values at which memory is probed. Design optimization methods provide a framework for exploiting this information to improve model discrimination. In this talk, we will review various Monte Carlo approaches to design optimization that have been proposed in the literature, and present preliminary results from our own applications of such approaches to discriminating among retention models and, if time permits, other models of cognition.
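One simple Monte Carlo criterion of this kind (a sketch under our own assumptions, not necessarily the authors' objective function) scores a candidate set of retention intervals by how often data simulated from one retention model are correctly recovered by maximum likelihood:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def p_pow(t, a, b): return np.clip(a * (t + 1.0)**(-b), 1e-6, 1 - 1e-6)
def p_exp(t, a, b): return np.clip(a * np.exp(-b * t), 1e-6, 1 - 1e-6)

def neg_loglik(model, t, k, n):
    def nll(th):
        p = model(t, *th)
        return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))
    # Two start points guard against local minima.
    return min(minimize(nll, x0, method="Nelder-Mead").fun
               for x0 in ([0.8, 0.5], [0.5, 0.1]))

def recovery_rate(t, n=30, sims=200):
    # How often power-law data are better fit by the power model.
    hits = 0
    for _ in range(sims):
        k = rng.binomial(n, p_pow(t, 0.9, 0.4))
        hits += neg_loglik(p_pow, t, k, n) < neg_loglik(p_exp, t, k, n)
    return hits / sims

designs = {"early-heavy": np.array([0.25, 0.5, 1.0, 2.0]),
           "spread":      np.array([1.0, 5.0, 10.0, 20.0])}
for name, t in designs.items():
    print(f"{name}: recovery rate {recovery_rate(t):.2f}")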



Presenter:  Angela Nelson
Presentation type:  Talk
Presentation date/time:  7/26  10:30-10:55
 
A Contextual Diversity Account of Frequency Effects
 
Angela Nelson, Indiana University
Richard Shiffrin, Indiana University
 
Frequency effects in memory and perceptual tasks have been shown to occur for novel items trained to differential degrees (Nelson & Shiffrin, 2006). Because each novel item is randomly assigned to a frequency category, these results are inconsistent with the REM model account of frequency (Shiffrin & Steyvers, 1997), which assumes that higher frequency items are composed of higher frequency features. We present a new model through which the lexical representation of the novel items develops over training. Each item's lexical representation is composed of features that are stored not only from the item itself, but also from the item's contextual surroundings. Since the higher frequency items are seen in a larger variety of contexts than lower frequency items, the higher frequency items develop a more diverse representation in the lexicon. The model is shown to account for the frequency effects found by Nelson and Shiffrin (2006).



Presenter:  Lance Nizami
Presentation type:  Poster
Presentation date/time:  7/27  17:30-18:30
 
A hidden limit and some latent flaws of published versions of the Green/deBoer Signal Detection Theory model of the difference limen for white noise
 
Lance Nizami, Boys Town National Research Hospital
 
The amplitude of white noise is Gaussian-distributed. That randomness may limit the detectability of an intensity change from interval to interval. Green (1960) and deBoer (1966) assumed that the detected change was that of average power, which for a single listening interval T, and noise bandwidth W, is the integral of the squared amplitude divided by T, approximated by (1/2WT) times the sum of 2WT samples of the square of the amplitude. Noises of differing intensities result in Gaussian-distributed average-power distributions having differing means and variances. The Signal Detection Theory "dprime" equals the difference between the two means, divided by the square root of half of the sum of the two variances. dprime, and the increment power divided by the noise power, called S/N, are both unitless positive real numbers, setting a maximum to dprime. The difference limen (DL) is a simple logarithmic equation in S/N. Psychophysical studies show that the empirical DLs exceed the predicted DLs. To eliminate that discrepancy, deBoer (1966) and Raab & Goldberg (1975) added a Gaussian-distributed, zero-mean "physiological noise" to the Gaussian-distributed average powers. But their corrected dprime does not produce a real-valued and positive S/N. Green & Swets (1966/1988) and Shofner, Yost, & Sheft (1993) proposed an alternative incorporation of "physiological noise". But their dprimes can only be derived with fewer than 2WT samples of noise amplitude. These latent flaws may explain the poor fit of all these authors' DL equations to their empirical DLs.
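The quantities in this argument are easy to simulate. The sketch below (parameter values hypothetical) computes dprime exactly as defined above from 2WT squared-amplitude samples per interval, alongside the corresponding analytic value:

import numpy as np

rng = np.random.default_rng(1)
W, T = 5000.0, 0.1                  # bandwidth (Hz), interval duration (s)
n = int(2 * W * T)                  # 2WT amplitude samples per interval
SN = 0.2                            # increment power / noise power
trials = 5000

# Average power = mean of squared Gaussian amplitudes over the interval.
P0 = (rng.normal(0.0, 1.0, (trials, n))**2).mean(axis=1)
P1 = (rng.normal(0.0, np.sqrt(1.0 + SN), (trials, n))**2).mean(axis=1)

# dprime as defined in the abstract: difference of means over the square
# root of half the sum of the variances.
dprime = (P1.mean() - P0.mean()) / np.sqrt((P0.var() + P1.var()) / 2)
d_theory = SN * np.sqrt(n) / np.sqrt(1.0 + (1.0 + SN)**2)
print(f"2WT = {n}, S/N = {SN}: d' = {dprime:.3f} (analytic {d_theory:.3f})")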



Presenter:  Jeffrey O'Brien
Presentation type:  Poster
Presentation date/time:  7/26  17:30-18:30
 
The P-rep Statistic as a Measure of Confidence in Model Fitting
 
Jeffrey O'Brien, University of California, Santa Barbara
F. Gregory Ashby, University of California, Santa Barbara
 
In traditional statistical methodology (e.g., analysis of variance), confidence in the observed results is often assessed by computing power. In most cases, adding more participants to a study will improve power more than increasing the amount of data collected from each participant. Thus, traditional statistical methods are biased in favor of experiments with large numbers of participants. Here we propose a method for computing confidence in the results of experiments in which much data is collected from few participants. In such experiments it is common to fit a series of mathematical models to the resulting data and to conclude that the best fitting model is superior. The probability of replicating this conclusion (i.e., Prep) is derived for any two nested models. Simulations and empirical applications of this new statistic confirm its utility as an alternative to power analyses in studies where much data is collected from few participants.



Presenter:  Peter C. Pantelis
Presentation type:  Poster
Presentation date/time:  7/27  17:30-18:30
 
Why are some faces' names easier to learn than others? The effects of similarity on memory for face-name pairs
 
Peter C. Pantelis, University of Pennsylvania
Marieke Van Vugt, University of Pennsylvania
Michael Kahana, University of Pennsylvania
 
Subjects studied the names of faces with known coordinates in a four-dimensional similarity space (Wilson, Loffler, & Wilkinson, 2002), which was verified in a multi-dimensional scaling study (van Vugt, Sekuler, Wilson, & Kahana, in preparation). When subjects were cued with a face at test, the probability that they recalled the correct name diminished in an approximately linear manner depending on how many faces in the study set were similar to the cue face, i.e. present within a small neighborhood radius in face space. Reaction time showed the inverse effect. Furthermore, intrusions were more likely to come from nearby positions in face space. We demonstrated corresponding effects in an associative recognition task.



Presenter:  Joonkoo Park
Presentation type:  Poster
Presentation date/time:  7/27  17:30-18:30
 
Sensorimotor Locus of Neuronal Buildup Activity in Monkey Lateral Intraparietal (LIP) Area During a Choice Reaction-Time Task
 
Joonkoo Park, University of Michigan, Ann Arbor
Jun Zhang, University of Michigan, Ann Arbor
 
Recent single-neuron studies offer support for information-accumulation models with a threshold-crossing mechanism during simple choice RT tasks. In a previous study of a random-dot motion-discrimination task (Roitman and Shadlen, 2002), the neuronal firing rate in the monkey's lateral intraparietal cortex (LIP) was shown to build up during each trial up until the monkey's behavioral response, with the accumulation ("buildup") rate monotonically related to the strength of the stimulus. Their data analysis, however, was unable to distinguish the sensorimotor role of the neurons during the stimulus-response association task. Here, we apply the technique of Locus Analysis (Zhang et al., 1997) to this data set in order to quantitatively characterize the processing "locus" of the neuronal activity along the sensorimotor continuum. Locus Analysis provides an index of how a neuron's firing activity is differentially related to stimulus identification and to response preparation in a trial-specific manner. Our analysis shows that this so-called differential activity of the LIP neuronal population is essentially zero at the beginning of a trial, then increases and finally peaks right before the saccadic response is made. High differential activity is associated with motoric coding as opposed to sensory coding in LIP neurons. Further, the differential activity peak is much tighter under response-locked than under stimulus-locked analysis, further supporting the claim that LIP buildup activity does not encode sensory information but rather encodes the intended movement. (Single-cell data kindly made available by Michael Shadlen.)



Presenter:  Amy Perfors
Presentation type:  Symposium
Presentation date/time:  7/28  10:55-11:20
 
Hierarchical phrase structure and poverty of the stimulus: A Bayesian approach
 
Amy Perfors, MIT
Joshua Tenenbaum, MIT
Terry Regier, University of Chicago
 
The Poverty of the Stimulus (PoS) argument holds that children do not receive enough evidence to infer the existence of core aspects of language, such as the dependence of linguistic rules on hierarchical phrase structure. We reevaluate one version of this argument using a Bayesian model of grammar induction, and show that a rational learner faced with typical child-directed input and without initial language-specific biases could learn this dependency. This enables the learner to master aspects of syntax, such as the auxiliary fronting rule in interrogative formation, even without having heard the sort of data often assumed to be necessary for learning (e.g., interrogatives containing an auxiliary in a relative clause in the subject NP).



Presenter:  Zygmunt Pizlo
Presentation type:  Talk
Presentation date/time:  7/26  9:00-9:25
 
Traveling Salesman Problem in real and VR space
 
Zygmunt Pizlo, Purdue University
Edward Carpenter, Purdue University
David Foldes, Purdue University
Emil Stefanov, Purdue University
Laura Arns, Purdue University
 
TSP on a Euclidean plane is solved quite well by humans when the stimulus is orthogonal to the line of sight. In such a case, the retinal image is identical to the stimulus up to an overall scaling factor. As a result, the projection from the stimulus to the retina does not change the TSP problem, because the lengths of all tours are changed by the same factor. We tested subjects under conditions involving perspective projection, which does change a TSP problem. Tennis balls were arranged on a floor within a 55' by 55' area. The number of tennis balls (cities) was 5, 10, and 20. There were 10 instances of each problem. The instances were generated in such a way that perspective projection changed the optimal tours substantially. The experiment on a real floor was replicated in a VR environment (CAVE). Performance in real space was very close to that on a computer monitor. However, performance in VR showed greater variability. There was an indication that subjects who had experience with VR performed better. Finally, we tested subjects with TSP in 3D space. Points were rendered in a volume of VR space. Performance with 3D TSP was an order of magnitude lower than with 2D TSP, suggesting that 2D representations have a special status in human problem solving.



Presenter:  Timothy Pleskac
Presentation type:  Talk
Presentation date/time:  7/26  15:10-15:35
 
A Dynamic and Stochastic Theory of Choice, Response Time, and Confidence
 
Timothy Pleskac, Indiana University
Jerome Busemeyer, Indiana University
 
The three most basic performance measures used in cognitive research are choice, decision time, and confidence. We present a diffusion model that accounts for all three variables using a common underlying process. The model uses a standard drift diffusion process to account for choice and decision time. To make a confidence judgment, we assume that evidence continues to accumulate after the choice. Judges then interrupt the process to categorize the accumulated evidence into a confidence rating. The fully specified model qualitatively accounts for the known relationships between all three variables. Besides the speed/accuracy trade-off, the model correctly predicts that confidence increases with accuracy. Finally, it captures the two-fold relationship between confidence and decision time. On the one hand, during optional stopping tasks (where the respondent determines when to stop and decide), there is an inverse relationship between the time taken and the degree of confidence expressed in the choice (Henmon, 1911). On the other hand, during externally controlled stopping tasks (where the experimenter determines when to stop and decide), the longer people are given to make a decision, the more confident they become (Irwin et al., 1956). Quantitatively, we will evaluate the ability of both the diffusion model and a Poisson model using Vickers' (1979) balance-of-evidence hypothesis to capture accuracy, response time distributions, and confidence rating distributions from a statement verification task. Theoretical implications and applications of the model to a variety of basic and applied tasks will be discussed.
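In the spirit of the model described (a minimal sketch; the parameter values and the confidence binning are hypothetical), a two-stage simulation accumulates to a bound for the choice and then keeps accumulating for a fixed interjudgment time before binning the evidence into a rating:

import numpy as np

rng = np.random.default_rng(7)

def trial(drift=0.8, bound=1.0, dt=0.001, sigma=1.0, post_time=0.3,
          conf_edges=(-0.5, 0.5, 1.5)):
    # Stage 1: standard drift diffusion to a bound -> choice, decision time.
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    choice = 1 if x > 0 else -1
    # Stage 2: post-decision accumulation, then bin into a 0..3 rating.
    for _ in range(int(post_time / dt)):
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
    conf = int(np.searchsorted(conf_edges, choice * x))
    return choice, t, conf

results = [trial() for _ in range(2000)]
correct = np.array([c == 1 for c, _, _ in results])   # drift favors +1
rt = np.array([t for _, t, _ in results])
conf = np.array([cf for _, _, cf in results])
print(f"accuracy {correct.mean():.2f}, mean decision time {rt.mean():.2f} s")
print("mean confidence | correct:", conf[correct].mean().round(2),
      "| error:", conf[~correct].mean().round(2))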



Presenter:  Sean Polyn
Presentation type:  Talk
Presentation date/time:  7/26  10:55-11:20
 
The interaction of task context and temporal context in memory search
 
Sean Polyn, University of Pennsylvania
Kenneth Norman, Princeton University
Michael Kahana, University of Pennsylvania
 
The principle of encoding specificity states that memory retrieval will be most successful when the memory cues available at retrieval match those present at study. Here, we investigate the ability of the memory system to alter the set of available cues on the fly during the search process, by retrieving and maintaining contextual details associated with the studied items. Thus, by retrieving context, the human memory system returns to the state it had during encoding, facilitating further recalls. We investigated this dynamic in a series of free-recall experiments in which encoding task context varied within a list. The encoding tasks included pleasantness, size, and animacy judgments. Analyses of recall transitions and serial-position effects suggest that the context of the encoding task exerted a strong influence on the organization of memory. Specifically, subjects showed a strong tendency to cluster items according to encoding task, and this task clustering showed an interaction with temporal clustering (the tendency to successively recall items studied nearby in time). Here, we explore the dynamics of a model of memory search that incorporates features of Howard and Kahana's Temporal Context Model (TCM), as well as a model of task context developed by Cohen and colleagues. This model explains the interaction between task and temporal clustering by the simultaneous use of the two context representations to probe memory.



Presenter:  Michael Pratte
Presentation type:  Talk
Presentation date/time:  7/27  13:15-13:40
 
Modeling participant and item effects in the theory of signal detection
 
Michael Pratte, University of Missouri - Columbia
Jeffrey Rouder, University of Missouri - Columbia
Richard Morey, University of Missouri - Columbia
 
Recognition memory has been conventionally modeled with the theory of signal detection by assuming that the memory strength of studied target words is increased over non-studied distractors. Previous research (e.g., Ratcliff, Sheu, & Gronlund, 1992; Glanzer, Kim, Hilford, & Adams, 1999) has indicated that study not only increases the mean memory strength of targets, but increases the standard deviation of their strength as well. We highlight a potential problem in these findings---analysis is predicated on aggregating data over items. Whereas these items may vary systematically (some are more memorable than others), there is unaccounted variance. We show how this variance distorts conventional measures leading to an asymptotic underestimation of mean strength (d') and an asymptotic overestimation of standard deviation. To provide for accurate estimation, we propose a Bayesian hierarchical model which simultaneously models participant and item effects without recourse to aggregation.



Presenter:  Braden Purcell
Presentation type:  Poster
Presentation date/time:  7/27  17:30-18:30
 
External and internal validity comparisons of three statistical analysis methods for sorting data using Munsell colors and personality traits
 
Braden Purcell, Miami University
Robin Thomas, Miami University
 
Discovering how people perceive objects such as faces, cars, foods, etc., is an important objective of researchers in marketing, clinical psychology, and cognitive science. Various methodologies exist for uncovering mental representations of items, often modeled as spatial maps. One task that has been used extensively when large numbers of objects are considered asks individuals to sort objects into user-chosen categories according to their overall similarities. Different statistical strategies for analyzing sorting data have been proposed, but have not been directly compared for their relative abilities to uncover accurate spatial representations of the objects. Using Munsell colors in one task and personality traits in a second task, we investigated three sorting analysis methods: homogeneity analysis (HOMALS), dissimilarity from correspondence matrices, and a method proposed by Bimler and Kirkland (2001) that used additional information from a subsequent merge task. All methods were assessed according to how well they recovered a known configuration established by other means (external validity), as well as how accurately each predicts the actual participant data (internal validity). For the colors, both external and internal validity measures suggested that the configuration obtained using the Bimler and Kirkland merge method was superior to that from the other two methods. For the traits, results were mixed in that different methods were superior along different indices. These results are not surprising given that traits are more complex categories than simple perceptual colors, and may be better represented by sets of features than as points in a cognitive map.



Presenter:  Brendan P. Purdy
Presentation type:  Talk
Presentation date/time:  7/27  11:20-11:45
 
A context-free language for binary multinomial processing tree models
 
Brendan P. Purdy, UC Irvine
William Batchelder, Institute for Mathematical Behavioral Sciences, UC Irvine
 
This paper provides a new formalization for the class of binary multinomial processing tree (BMPT) models, and new theorems for the class are developed using the formalism. MPT models are a popular class of information processing models for categorical data in specific cognitive paradigms. They have a recursive structure that is productively described with the tools of formal language and computation theory. We provide a proof-theoretic axiomatization that characterizes BMPT models as strings in a context-free language, and then we add model-theoretic axioms to interpret the strings as parameterized probabilistic models for categorical data. The language for BMPT models is related to the Dyck language, a well-studied context-free language. Once BMPT models are viewed from the perspective of the Dyck language, a number of theoretical and computational results can be developed. We first look at the subclass of BMPT models that satisfy a uniqueness condition, namely that they have unique categories and parameters. First, we give a complete enumeration of the models under the uniqueness condition. Second, we show when two such models are statistically equivalent. Third, we use the pushdown automaton associated with the Dyck language to partition the models and develop algorithms that compute the probability distribution functions (pdfs) for any given model. Lastly, we relax the uniqueness assumption and modify the aforementioned algorithms to generate the pdfs for models under linear-order parameter constraints.
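The Dyck-like membership check underlying such a formalism reduces to a single counter, as in the following sketch (the symbol conventions here are ours, purely for illustration): lowercase letters play the role of parameters at internal binary nodes and uppercase letters the role of category labels at leaves, in prefix notation.

def is_bmpt_word(word):
    # One-counter (pushdown-style) check, as for the Dyck language,
    # that a prefix-notation string encodes a full binary tree.
    open_subtrees = 1                  # awaiting the root
    for s in word:
        if open_subtrees == 0:         # tree complete before string ends
            return False
        if s.islower():                # internal node: closes 1, opens 2
            open_subtrees += 1
        else:                          # leaf: closes 1
            open_subtrees -= 1
    return open_subtrees == 0

# p(C, q(C, C)) -> "pCqCC" is well formed; the others are not.
for w in ["pCqCC", "pqCCC", "pC", "CpC"]:
    print(w, is_bmpt_word(w))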



Presenter:  Vinayak Rao
Presentation type:  Talk
Presentation date/time:  7/26  11:20-11:45
 
Contextual retrieval in semantic memory: Building semantic spaces with TCM
 
Vinayak Rao, Syracuse University
Marc Howard, Syracuse University
 
The temporal context model was developed to describe episodically-formed associations between words presented in temporal proximity. By allowing words to retrieve and update the previous contexts in which they were presented, the model can learn higher-order co-occurrence information as well. We extend the model to form stable representations that capture latent relations between presented words. We test the model on artificial texts with simple known generating structures, as well as naturally-occurring text. In particular, we study semantic information learned from the TASA corpus, where the model's performance is comparable to that of LSA on standard synonym tests. Taken with the temporal context model's ability to describe episodic association, this suggests that contextual encoding and retrieval are fundamental computations common to episodic and semantic memory.
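For reference, the contextual update at the heart of this account is the Howard and Kahana (2002) evolution equation, sketched here in a few lines (the dimension and drift rate are arbitrary choices):

import numpy as np

def update_context(t_prev, t_in, beta):
    # TCM update: t_i = rho * t_{i-1} + beta * t_in, with rho chosen so
    # that the new context vector has unit length (Howard & Kahana, 2002).
    c = float(np.dot(t_prev, t_in))
    rho = np.sqrt(1.0 + beta**2 * (c**2 - 1.0)) - beta * c
    t = rho * t_prev + beta * t_in
    return t / np.linalg.norm(t)       # guard against rounding drift

rng = np.random.default_rng(8)
dim, beta = 50, 0.4
t = rng.normal(size=dim); t /= np.linalg.norm(t)
start = t.copy()
for lag in range(1, 4):                # context drifts as items arrive
    t_in = rng.normal(size=dim); t_in /= np.linalg.norm(t_in)
    t = update_context(t, t_in, beta)
    print(f"lag {lag}: similarity to starting context {np.dot(start, t):.3f}")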



Presenter:  Roger Ratcliff
Presentation type:  Talk
Presentation date/time:  7/26  13:15-13:40
 
Evaluating the EZ Fitting Method for the Diffusion Model
 
Roger Ratcliff, Ohio State University
 
Wagenmakers et al. (in press) claimed that the use of the diffusion model in experimental psychology has been less than widespread because of the difficulty of fitting the model to data. They proposed a new method for fitting the model ("EZ") that is simpler than the standard chi-square method. Wagenmakers et al. also suggested that the EZ method can produce accurate parameter estimates in cases where the chi-square method would fail, specifically experimental conditions with small numbers of observations and conditions with accuracy near ceiling. I present a number of comparisons between the two methods: 1. Unlike the chi-square method, the EZ method is extremely sensitive to outlier RTs. 2. It is consistently less efficient in recovering most parameter values. 3. It produces estimates of parameter values that are highly variable (more variable than the chi-square method when the number of observations in a condition is small). 4. Small misspecifications can lead to errors in data interpretation. 5. The proposed tests for misspecification are not powerful enough when the number of observations in an experimental condition is small. I also present a comparison between EZ parameter estimates and chi-square estimates for a published experiment (Ratcliff, Thapar & McKoon, 2003). My conclusion is that the EZ method could be quite useful for exploring parameter spaces, but should not be used when meaningful estimates of parameter values are needed.
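For readers unfamiliar with the method under evaluation, the EZ equations themselves are closed-form. The sketch below follows the published equations of Wagenmakers et al.; the input values in the example are arbitrary.

import numpy as np

def ez_diffusion(pc, vrt, mrt, s=0.1):
    # EZ-diffusion: closed-form drift rate, boundary separation, and
    # nondecision time from proportion correct (pc), variance of correct
    # RTs (vrt, in s^2), and mean correct RT (mrt, in s). pc must not be
    # 0, 0.5, or 1 (the published method applies edge corrections).
    L = np.log(pc / (1 - pc))                       # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = np.sign(pc - 0.5) * s * x**0.25             # drift rate
    a = s**2 * L / v                                # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))
    ter = mrt - mdt                                 # nondecision time
    return v, a, ter

v, a, ter = ez_diffusion(pc=0.802, vrt=0.112, mrt=0.723)
print(f"v = {v:.3f}, a = {a:.3f}, Ter = {ter:.3f}")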



Presenter:  Roger Ratcliff
Presentation type:  Talk
Presentation date/time:  7/27  14:45-15:10
 
Modeling Confidence Judgments in Recognition Memory
 
Roger Ratcliff, Ohio State University
Jeffrey Starns, Ohio State University
 
We have developed a model of confidence judgments in recognition memory that assumes that evidence for each confidence category is accumulated in a separate diffusion process. The model assumes that activity in each diffusion process cannot fall below zero and that there is decay in the process (i.e., an OU diffusion process). The model makes predictions for both the accuracy and the response time distributions for each confidence judgment. Stimulus information is assumed to be represented as a normal distribution of values on a familiarity scale, with different distributions for old and new items. Confidence criteria are placed on this familiarity dimension, and the rate of accumulation for each response category is determined by the area under the distribution between the confidence criteria. The model incorporates several identifiable sources of variability: variability within the decision process, and variability in familiarity, decision criteria, and nondecision components of processing across trials. This means that the standard interpretation of the z-ROC function is no longer valid: deviations of the slope from unity reflect both decision criterion settings across confidence criteria and differences in the standard deviations of the familiarity distributions. We present results from experiments in which instructions for using the different categories are either "be accurate" or "spread responses equally across categories" (the usual instructions) and show how the latter leads to inconsistency in data and fits. We also discuss sequential effects.



Presenter:  Michel Regenwetter
Presentation type:  Talk
Presentation date/time:  7/28  10:55-11:20
 
Testing Transitivity of Preferences: A Status Report
 
Michel Regenwetter, University of Illinois at Urbana-Champaign
Jason Dana, University of Pennsylvania
Clintin Davis-Stober, University of Illinois at Urbana-Champaign
 
Testing deterministic decision making axioms against experimental choice data sends the researcher down a path fraught with conceptual, methodological, and computational pitfalls. Of the more than one hundred papers related to preference (in)transitivity that we know of, nearly every one falls victim to at least one critical problem, and most fall victim to several. These problems cast doubt on the validity of their conclusions. We provide a status report of past and present efforts to test transitivity of preference, and we discuss an approach that avoids the various problems of past work in this area. This work is supported by the Air Force Office of Scientific Research under grant #FA 9550-05-1-0356 (PI: M. Regenwetter).



Presenter:  Cory Rieth
Presentation type:  Poster
Presentation date/time:  7/27  17:30-18:30
 
Classification images from noise only trials: A comparison between face and letter detection
 
Cory Rieth, University of California, San Diego
David Huber, University of California, San Diego
Hongchuan Zhang, University of California, San Diego
Kang Lee, University of Toronto
 
Reverse correlation techniques yield visual classification images by combining large numbers of noise images based on neural or behavioral responses. These responses are commonly collected while viewing a combination of noise and target. In the reported studies, we produced pure top-down processing classification images by using noise-only images while participants engaged in detection of either one of many different possible faces or one of many different possible letters. Classification images based on pixel noise require many thousands of separate noise trials. Instead, we sought evidence of higher-level, more complex characteristics by creating noise images that combined randomly placed Gaussian "blobs" at varying spatial scales. Face and letter detection were tested with exactly the same sequence of 480 noise images in separate experiments, so that any differences would reflect only the nature of the detection task. The resultant face and letter classification images differed in spatial frequency, laterality, and spatial heterogeneity (i.e., number of distinct regions). Because the noise images were briefly presented, with insufficient time for saccades, these laterality effects may relate to cortical hemisphere specialization.
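
The analysis logic can be sketched in Python as follows (the blob-noise generator and the responses are simplified stand-ins for the actual stimuli and data):

    import numpy as np

    rng = np.random.default_rng(0)

    def blob_noise(size=64, n_blobs=20, scales=(2, 4, 8)):
        """One noise image: randomly placed Gaussian blobs at several spatial scales."""
        y, x = np.mgrid[:size, :size]
        img = np.zeros((size, size))
        for _ in range(n_blobs):
            cx, cy = rng.uniform(0, size, 2)
            s = rng.choice(scales)
            img += rng.choice([-1, 1]) * np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * s**2))
        return img

    noise = np.array([blob_noise() for _ in range(480)])
    said_yes = rng.random(480) < 0.5    # stand-in for observers' detection responses

    # Classification image: mean of 'yes' noise images minus mean of 'no' images.
    ci = noise[said_yes].mean(axis=0) - noise[~said_yes].mean(axis=0)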



Presenter:  Terri Robinett
Presentation type:  Poster
Presentation date/time:  7/26  17:30-18:30
 
Mathematical Support for Kohlberg's Moral Stage Theory: Applying a Model of Hierarchical Complexity and Rasch Analysis
 
Terri Robinett, College of the Desert
 
Studies based on Kohlberg's moral development stages have concluded that political liberals tend to operate within the higher, principled stages of moral reasoning while conservatives operate at the lower, conventional levels. Critics argue that Kohlberg's concept of developmental stage, as well as the instruments used to measure it, is invalid. In order to support the notion of developmental stages, this study utilized the Model of Hierarchical Complexity (MHC) to relate an individual's performance on multiple measures of moral reasoning to a mathematical order of hierarchical complexity. Moral dilemma test items from various standardized performance-based tests, which are typically scored using predetermined subjective criteria, were used. A mathematical order of hierarchical complexity was applied to each item, and the results were then analyzed using Rasch analysis. Overall results indicated that, with a few specific exceptions, there was no relationship between order of hierarchical complexity and political identification. These findings are in opposition to earlier studies; however, they do provide objective, empirical support for Kohlberg's moral stage theory. Future research is needed to compare the results of traditional moral reasoning tests with the hierarchical methodology used in this study. Hierarchical complexity may prove to be a valuable tool for objectively measuring individual differences in other realms of social science.



Presenter:  Sara Ross
Presentation type:  Poster
Presentation date/time:  7/26  17:30-18:30
 
The Fractal Nature of Stage Change: A Model and Transition Data
 
Sara Ross, ARINA, Inc.; Dare Institute
 
The Model of Hierarchical Complexity (MHC) is a discrete state model that posits a series of nominal-scale orders of increasing task complexity. The Model provides a mathematical expression of each order of complexity. Uses of the Model to date include the behaviors of individual animals and humans, organizations, and social institutions, and indicate its applicability to tasks at various fractal scales, including those of time and social complexity. Dialectical processes of transition from any order to the next higher order comprise a sequence of discrete-state transition steps. The sequence of tasks in transition is identical from order to order; thus, the pattern of transition steps is also fractal. The transition steps produce successively more complete organization of combinations of elements at the next order while, at the same time, those elements increase in complexity from one order to the next, with fractal similarities to the overall model. This paper uses the MHC's mathematical expressions of the orders as the foundation for this first description of the fractal nature of the transition steps. It includes scored examples of transitions at several different scales of time, social complexity, and hierarchical complexity, showing scales, steps, and transition data. It invites collaboration to develop mathematical models of these fractal transition processes. It indicates an extension of psychophysics and suggests applications to decision theory, problem solving, learning models, time series analyses, game theory, information processing, and other analyses (e.g., policy changes or political polling over time).



Presenter:  Jeff Rouder
Presentation type:  Symposium
Presentation date/time:  7/26  11:20-11:45
 
Using Dispersion in Bayesian Hierarchical Models
 
Jeff Rouder, University of Missouri
Dongchu Sun, University of Missouri
Paul Speckman, University of Missouri
Jun Lu, American University
 
Many nonlinear hierarchical models include linear model components. The Weibull, for example, is a suitable nonlinear model for response time; a linear model may be placed on the log of its scale parameter to account for person, item, and condition effects. We have advocated linear models that account for dispersion through the inclusion of additional noise terms, even when identifiability depends on the choice of prior. We show how these additional noise terms simplify sampling, vastly reduce autocorrelation in MCMC outputs, and provide for convenient computation of Bayes factors.
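
A generative sketch in Python of the kind of structure described -- Weibull response times with a linear model, plus an extra dispersion term, on the log of the scale parameter (all parameter values are hypothetical):

    import numpy as np

    rng = np.random.default_rng(1)
    n_sub, n_item, shape = 20, 30, 2.0

    mu = np.log(0.5)                            # grand mean of the log scale
    alpha = rng.normal(0, 0.2, n_sub)           # person effects
    beta = rng.normal(0, 0.1, n_item)           # item effects
    eps = rng.normal(0, 0.05, (n_sub, n_item))  # additional dispersion (noise) term

    scale = np.exp(mu + alpha[:, None] + beta[None, :] + eps)
    rt = 0.3 + scale * rng.weibull(shape, (n_sub, n_item))  # 0.3 s shift parameter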



Presenter:  Adam Sanborn
Presentation type:  Talk
Presentation date/time:  7/26  14:45-15:10
 
Exploring the Subjective Probability Distributions of Natural Categories
 
Adam Sanborn, Indiana University
Thomas Griffiths, University of California, Berkeley
Richard Shiffrin, Indiana University
 
Categories are central to cognition, reflecting our knowledge of the structure of the world, supporting inferences, and serving as the basic units of thought. The process used by people to group objects into categories has been extensively studied, but generally using training paradigms with artificial categories. Drawing on a correspondence between human choice behavior and a popular statistical algorithm, a Markov chain Monte Carlo (MCMC) method is used to explore the subjective probability distributions of natural categories. This method does not make any distributional assumptions and allows arbitrary category structures to be determined. In addition, the MCMC method is combined with multidimensional scaling in order to describe natural category structures in a psychologically-relevant similarity space. We apply this method to determine the subjective probability distributions of basic-level categories of fruits and vegetables. These empirical distributions are compared to the characteristic subjective probability distributions of standard models of categorization.
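
The correspondence can be sketched in Python: if a simulated observer's choices between the current state and a proposal follow the Luce/Barker choice rule, the chain's stationary distribution is the observer's subjective category distribution (the observer and all values here are hypothetical):

    import numpy as np

    rng = np.random.default_rng(2)

    def subjective_density(x):          # simulated observer's category distribution
        return np.exp(-0.5 * ((x - 3.0) / 0.7)**2)

    x = 0.0                             # current state of the chain
    samples = []
    for _ in range(5000):
        proposal = x + rng.normal(0, 0.5)
        # The 'participant' picks the better category member with Luce-choice odds;
        # this (Barker) acceptance rule makes the subjective distribution stationary.
        p = subjective_density(proposal) / (subjective_density(proposal) + subjective_density(x))
        if rng.random() < p:
            x = proposal
        samples.append(x)               # samples trace out the subjective distribution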



Presenter:  Tadamasa Sawada
Presentation type:  Talk
Presentation date/time:  7/27  13:15-13:40
 
Detecting symmetry in perspective images
 
Tadamasa Sawada, Purdue University
Zygmunt Pizlo, Purdue University
 
Symmetric objects rarely produce symmetric retinal images. However, human observers have little difficulty in discriminating whether a given retinal image was produced by a symmetric or an asymmetric object. We tested perception of planar (2D) symmetric objects when the objects were slanted in depth. First, we compared performance in detecting symmetry with dotted patterns to that with polygons. Symmetry could be detected reliably with polygons, but not with dotted patterns. Second, we showed that symmetry detection is improved when the projected symmetry axis or symmetry lines (the features representing the symmetry of the pattern itself) are known to the subject, but not when the axis of rotation (the feature representing the 3D viewing direction) is known. Third, we compared performance with orthographic images and that with perspective images, and found that performance with orthographic images is better. Finally, we tested reconstruction of symmetric polygons from orthographic images. Based on these results, we propose a computational model, which measures the asymmetry of the presented polygon based on its single orthographic or perspective image. Performance of the model is similar to the performance of human subjects.



Presenter:  Verena Schmittmann
Presentation type:  Poster
Presentation date/time:  7/27  17:30-18:30
 
Flexibility and generalizability of learning models embodying both all-or-none and incremental learning assumptions
 
Verena Schmittmann, University of Amsterdam
Ingmar Visser, University of Amsterdam
Maartje Raijmakers, University of Amsterdam
William Batchelder, University of California, Irvine
 
Several simple mathematical learning models have been proposed that combine aspects of the basic all-or-none model and linear operator learning models for two response alternatives. Three of these hybrid models are the insight model, the two-phase model, and the random-trial incremental model. Each of these models formalizes different assumptions about the learning process, but all three nest the two basic models. All of these models were designed to predict error-success sequences terminating in a criterion run of successes in a simple learning task with feedback. Most statistical inference for these models has been based on the assumption that all error-success sequences are probabilistically determined by the same set of parameters. Although this assumption excludes individual differences in learning ability, great individual differences in performance are still predicted. In the presence of individual differences in learning ability -- e.g., in a mixture model where each mixture component consists of one of the models with its own set of parameter values -- the flexibility of the models is enhanced even further. In this study, the basic models, the hybrid models, and the individual-difference models were compared using both analytical and simulation methodologies. In a cross-fitting simulation study, trade-offs between parameter values, model parameter recovery, and the performance of several indices for model selection were examined under different sample size and parameter value conditions.
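
For reference, Python sketches of the two basic models that the hybrids nest (parameter values are hypothetical, and sequences are generated for a fixed number of trials rather than to a criterion run):

    import numpy as np

    rng = np.random.default_rng(3)

    def all_or_none(c=0.2, g=0.5, n_trials=50):
        """Guess (prob. g correct) until an all-or-none jump to the learned state."""
        learned, seq = False, []
        for _ in range(n_trials):
            seq.append(1 if (learned or rng.random() < g) else 0)  # 1 = success
            if not learned and rng.random() < c:
                learned = True
        return seq

    def linear_operator(q0=0.5, alpha=0.9, n_trials=50):
        """Error probability shrinks by a constant factor on every trial."""
        q, seq = q0, []
        for _ in range(n_trials):
            seq.append(0 if rng.random() < q else 1)
            q *= alpha
        return seq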



Presenter:  Richard Schweickert
Presentation type:  Talk
Presentation date/time:  7/27  10:30-10:55
 
Factors Selectively Influencing Processes in Multinomial Processing Trees
 
Richard Schweickert, Purdue University
Shengbao Chen, Purdue University
 
Suppose a memory task is carried out by executing processes in a multinomial processing tree, with two categories of terminal vertices, say correct and incorrect. Suppose changing the level of one experimental factor changes the probabilities on edges descending from a single vertex, and changing the level of another experimental factor changes the probabilities on edges descending from another single vertex. In earlier work, we showed that if each factor changes probabilities on exactly two edges, then the multinomial processing tree underlying the task is equivalent to one of two relatively simple trees. Here we allow each factor to change the probabilities on any number of edges. We show that only one more tree need be considered. That is, the tree underlying the task is equivalent to either one of the two trees presented earlier, or to a new relatively simple tree.
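
A small Python sketch of the setting: category probabilities are computed recursively from a binary processing tree, and a change in a factor level changes the probabilities on edges descending from a single vertex (the tree and values are hypothetical):

    # A tree is (p, left, right): take the left branch with probability p.
    # Leaves are category labels ('C' correct, 'I' incorrect).
    def p_category(tree, cat):
        if isinstance(tree, str):
            return 1.0 if tree == cat else 0.0
        p, left, right = tree
        return p * p_category(left, cat) + (1 - p) * p_category(right, cat)

    def tree(store, retrieve, guess):
        return (store, (retrieve, 'C', (guess, 'C', 'I')), (guess, 'C', 'I'))

    print(p_category(tree(0.8, 0.6, 0.5), 'C'))  # one level of a 'retrieval' factor
    print(p_category(tree(0.8, 0.3, 0.5), 'C'))  # the factor changes edges below one vertex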



Presenter:  John Serences
Presentation type:  Symposium
Presentation date/time:  7/26  15:10-15:35
 
Multivariate fMRI Investigations of Attention and Perceptual Decision Making
 
John Serences, UC Irvine
 
Visual features can be grouped into superordinate categories such as motion or color, as well as subordinate categories such as a specific direction of motion. Traditionally, fMRI studies have been restricted to the superordinate level of analysis because the BOLD response is spatially imprecise with respect to the topology of subordinate-level selectivity within visual cortex. For example, a 180° array of motion-selective columns in area MT is contained within 0.5 mm of cortex. Functional MRI voxels are large in comparison; however, if a preponderance of neurons preferring a particular feature are sampled within a voxel, then that voxel will exhibit a small feature-selective response bias. By considering the pattern of activity across many weakly selective voxels, it is possible to predict the feature that an observer is viewing. I describe two studies illustrating the utility of applying multivariate methods to the study of human perception and cognition. First, we predicted the attentional state of human observers as they monitored one of two overlapping directions of motion. Our analysis revealed that feature-specific attentional modulations spread to blank regions of the display, presumably increasing sensitivity to behaviorally relevant features across the visual field. In a second example, we show that the pattern of activation in nearly all visual areas discriminates the direction of a field of moving dots. However, some midlevel areas also discriminate the 'perceived' direction of motion even when presented with an ambiguous stimulus, suggesting that these regions support perception in the absence of sensory evidence.
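
A toy Python sketch of the multivariate logic -- many weakly biased voxels, pooled by a linear classifier, support above-chance decoding (simulated data, not the actual analysis pipeline):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n_vox, n_trials = 200, 400
    bias = rng.normal(0, 0.1, n_vox)          # each voxel's small directional preference

    direction = rng.integers(0, 2, n_trials)  # two motion directions
    signal = np.where(direction[:, None] == 1, bias, -bias)
    bold = signal + rng.normal(0, 1.0, (n_trials, n_vox))  # weak signal, large noise

    clf = LogisticRegression(max_iter=1000).fit(bold[:300], direction[:300])
    print(clf.score(bold[300:], direction[300:]))          # above-chance decoding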



Presenter:  Cyrus Shaoul
Presentation type:  Poster
Presentation date/time:  7/26  17:30-18:30
 
Optimizing HAL parameter space for predicting linguistic behavior.
 
Cyrus Shaoul, University of Alberta
Chris Westbury, University of Alberta
 
HAL (Hyperspace Analog to Language) is a high-dimensional model of semantic space that uses the global co-occurrence frequency of words in a large corpus of text as the basis for its representation of semantic memory. We have explored the parameter space of the model to find an optimized set of parameters (such as window size and weighting function). These new parameters give us measures of semantic density that predict human behavioral measures better than the original HAL parameters.
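
A minimal Python sketch of a HAL-style weighted co-occurrence count; the window size and the (here, ramped) weighting function are exactly the kinds of parameters being optimized (the corpus is a stand-in):

    from collections import defaultdict

    corpus = "the quick brown fox jumps over the lazy dog".split()  # stand-in corpus
    window = 4                                    # parameter under study

    cooc = defaultdict(float)
    for i, word in enumerate(corpus):
        for d in range(1, window + 1):            # words up to `window` positions back
            if i - d >= 0:
                weight = window - d + 1           # ramped weighting: nearer = heavier
                cooc[(word, corpus[i - d])] += weight

    print(cooc[("fox", "brown")])                 # weighted co-occurrence strength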



Presenter:  Wayne Shebilske
Presentation type:  Symposium
Presentation date/time:  7/27  11:20-11:45
 
A Model-Driven Instructional Strategy: The Benchmarked Experiential System for Training (BEST)
 
Georgiy Levchuk, Aptima
Wayne Shebilske, Wright State University
Jared Freeman, Aptima
 
Military and commercial simulation systems are often used in team training, but they are not training systems in a formal sense. Simulators present rich practice opportunities, but generally do not ensure that these are administered in an instructionally efficient manner. We describe a POMDP model that selects, from a large library, the training scenario that will most efficiently advance a team toward expertise given its performance on the previous scenario. Two experiments demonstrate that this model-driven instructional strategy reliably increases team performance on far-transfer tasks, relative to a control strategy, hierarchical part-task training. We speculate on the cause of this effect and propose research to explore and exploit these effects in military training.



Presenter:  Richard Shiffrin
Presentation type:  Talk
Presentation date/time:  7/28  9:25-9:50
 
Model selection with few observations
 
Andrew Cohen, University of Massachusetts, Amherst
Adam Sanborn, Indiana University
Richard Shiffrin, Indiana University
 
Analyzing the data of individuals has several advantages over analyzing data combined across individuals (we term this 'group analysis'): grouping can distort the form of data, and different individuals might perform the task using different processes and parameters. We investigate the possibility that group analysis might still be useful, and might even outperform individual analysis, when there is a large amount of measurement noise, such as might occur in an experiment with few trials per condition. We employ a simulation technique in which data are generated from each of two known models (e.g., the exponential and power laws of forgetting), each with parameter variation across simulated individuals. We examine how well the generating model and its competitor each fare in fitting (both sets of) the data, using both individual and group analysis. To assess accuracy, we need a method for selecting a model based on the fits. We compare the results of selecting models based on maximum likelihood, AIC, BIC, minimum description length, Bayesian model selection, cross validation, generalization, and predictive validation. Examining a wide range of comparison models, subject parameters, numbers of subjects, and trials per condition, we found cases where group analysis is a more accurate method of model selection.
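
A compact Python sketch of one cell of such a simulation (exponential vs. power forgetting, least-squares fits, AIC as the selection criterion; all settings are hypothetical):

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(5)
    t = np.array([1., 2., 4., 8., 16.])
    power = lambda t, a, b: a * t**(-b)
    expon = lambda t, a, b: a * np.exp(-b * t)

    def aic(model, t, y):
        (a, b), _ = curve_fit(model, t, y, p0=(0.8, 0.3), maxfev=5000)
        rss = np.sum((y - model(t, a, b))**2)
        return len(y) * np.log(rss / len(y)) + 2 * 2  # Gaussian-error AIC, 2 parameters

    # Noisy 'individuals' generated from the power law, with parameter variation.
    data = np.array([power(t, rng.uniform(.6, .9), rng.uniform(.2, .5))
                     + rng.normal(0, .08, t.size) for _ in range(20)])

    ind = sum(aic(power, t, y) - aic(expon, t, y) for y in data)  # per-individual, summed
    grp = aic(power, t, data.mean(axis=0)) - aic(expon, t, data.mean(axis=0))
    print(ind, grp)   # negative differences favor the (true) power model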



Presenter:  Noah Silbert
Presentation type:  Poster
Presentation date/time:  7/27  17:30-6:30
 
Independence in perception of complex non-speech sounds
 
Noah Silbert, Indiana University
James Townsend, Indiana University
Jennifer Lentz, Indiana University
 
Little, if any, work has explicitly addressed independence in the perception of complex sounds. General Recognition Theory provides a powerful framework in which to address such issues. Two experiments were carried out to test within-stimulus, between-stimulus, and decision-related notions of independence in two stimulus sets. One set consisted of broadband noise stimuli varying in frequency and duration, the other of harmonic tone complexes varying in fundamental frequency and spectral shape. Stimulus presentation likelihood was manipulated to enable hierarchical model fitting, which in turn provides powerful new tests of independence. The model fitting analyses indicate that decision-related independence (decisional separability) holds for all participants with each stimulus set; that within-stimulus independence (perceptual independence) holds and between-stimulus independence (perceptual separability) fails for all participants with the noise stimuli; and that both perceptual independence and perceptual separability fail for participants with the harmonic stimuli.



Presenter:  Cole Smith
Presentation type:  Symposium
Presentation date/time:  7/27  15:10-15:35
 
Network Design Under Varying Enemy Behaviors
 
Cole Smith, University of Florida
Fransisca Sudargho, University of Arizona
Churlzu Lim, University of North Carolina-Charlotte
 
We examine the problem of building or fortifying a network to defend against enemy attack scenarios. In particular, we examine the case in which an enemy can destroy any portion of any arc that a designer constructs on the network, subject to some interdiction budget. This problem takes the form of a three-level, two-player game, in which the designer acts first to construct a network and transmit an initial set of flows through the network. The enemy acts next to destroy a set of constructed arcs in the designer's network, and the designer acts last to transmit a final set of flows in the network. Most studies of this nature assume that the enemy will act optimally; however, in real-world scenarios one cannot necessarily assume rationality on the part of the enemy. Hence, we prescribe network design principles for three different profiles of enemy action: an enemy destroying arcs based on capacities, based on initial flows, or acting optimally to minimize our maximum profits obtained from transmitting flows.



Presenter:  Jared Smith
Presentation type:  Talk
Presentation date/time:  7/28  9:50-10:15
 
Effects of Misspecification in Hierarchical modeling
 
Jared Smith, University of California, Irvine
 
A number of recent papers within the cognitive modeling literature have proposed the application of hierarchical modeling to take into account subject variability (e.g., DeCarlo, 2002, Psychological Review; Karabatsos & Batchelder, 2003, Psychometrika; Klauer, 2006, Psychometrika; Lee & Webb, 2005, Psychonomic Bulletin & Review; Rouder & Lu, 2005, Psychonomic Bulletin & Review; Rouder, Sun, Speckman, Lu, & Zhou, 2003, Psychometrika). One limitation of these methods is that they require approximately accurate hierarchical assumptions concerning the distribution of a model's parameters over subjects. These assumptions may be misspecified even if the base model is correctly specified. The purpose of this paper is to examine the consequences of misspecification at the hierarchical level with simulations and analysis of existing data sets. In particular, it is demonstrated that the application of finite mixture models may lead to deceptive results if the underlying distribution of the parameters is continuous and unimodal. Conversely, results may be problematic when individual differences are modeled with unimodal distributions but the true data-generating distribution is multimodal (e.g., a finite mixture model). It is argued that hierarchical modeling provides a powerful method for accounting for individual differences, but that researchers should take care in interpreting fitted hierarchical distributions without appropriate model checking, especially of the hierarchical assumptions.



Presenter:  Brian Stankiewicz
Presentation type:  Symposium
Presentation date/time:  7/27  10:30-10:55
 
Using Partially Observable Markov Decision Processes to Understand Human Sequential Decision Making with Uncertainty
 
Brian Stankiewicz, University of Texas, Austin
 
Humans appear to possess a remarkable ability to make good and rapid decisions under incomplete knowledge (uncertainty), and to make these decisions in sequence. Although we know this anecdotally, to fully understand this ability one would like to compare human performance to theoretically optimal performance. Our lab has conducted a series of studies investigating human sequential decision making under uncertainty using Partially Observable Markov Decision Processes (POMDPs) as a benchmark for human behavior (Stankiewicz, Legge, Mansfield, & Schlicht, 2006; Stankiewicz, under review). We have used this benchmark to measure decision efficiency (human performance relative to optimal performance) for a variety of tasks. These studies have produced some interesting findings. First, across a variety of tasks we find that people are approximately 50% efficient when they have to make all of the calculations and decisions on their own. Furthermore, we find that providing a memory aid that makes explicit the participant's previous actions and observations does not significantly improve performance. However, providing information about the likelihood of each state being true improves participants' performance to approximately 95% efficiency. In this talk I will describe the POMDP framework and how it has been used in these studies to elucidate the limitations in human sequential decision making.
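
For reference, the standard POMDP belief update that underlies the optimal benchmark, in Python (a generic textbook form with a toy two-state problem; the actual task spaces are larger):

    import numpy as np

    def update_belief(belief, action, obs, T, O):
        """b'(s') is proportional to O[a, s', o] * sum_s T[a, s, s'] * b(s)."""
        predicted = belief @ T[action]            # sum_s T[a, s, s'] b(s)
        new_belief = O[action][:, obs] * predicted
        return new_belief / new_belief.sum()

    # Toy example: 2 states, 1 action, 2 observations (values hypothetical).
    T = np.array([[[0.9, 0.1], [0.2, 0.8]]])      # T[a, s, s']
    O = np.array([[[0.7, 0.3], [0.1, 0.9]]])      # O[a, s', o]
    print(update_belief(np.array([0.5, 0.5]), 0, 1, T, O))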



Presenter:  Ulrike Stege
Presentation type:  Symposium
Presentation date/time:  7/27  14:05-14:30
 
On Computational Models for Updating Rational Belief Systems
 
Ulrike Stege, University of Victoria
 
Theories for belief revision have been studied by philosophers and logicians as well as cognitive scientists and social scientists. While a lot of work in the former group investigates how people should change their minds, the latter asks how people actually do change their minds. A challenge in this active area of research is to bridge the gap between philosophical approaches and empirical evidence in the field of cognitive science and experimental psychology. In this talk, we concentrate on the problem of belief revision in rational belief systems. Such a system is a coherent theory of sentences connecting propositional and transitional beliefs. Updates of rational belief systems come in two flavors: the adding of a sentence (with or without beliefs), and the removal of a belief. We consider philosophical theories modeling rational belief revision and propose, using tools from complexity theory, how to gather evidence to support or refute such a model.



Presenter:  Mark Steyvers
Presentation type:  Symposium
Presentation date/time:  7/28  11:20-11:45
 
Google and the mind: Predicting fluency with PageRank
 
Mark Steyvers, UC Irvine
Thomas Griffiths, UC Berkeley
Alana Firl, Brown University
 
If human cognition approximates optimal solutions to the computational problems posed by our environment, then we should expect to find correspondences between human behavior and that of other systems that successfully solve similar problems. Human memory and internet search engines face a shared computational problem, needing to retrieve stored pieces of information in response to a query. Consequently, we explore whether they employ similar solutions, testing whether we can predict human performance on a fluency task using PageRank, a component of the Google search engine. In this task, people are shown a letter of the alphabet and asked to name the first word that comes to mind beginning with that letter. We show that PageRank, computed on a semantic network constructed from word association data, outperforms word frequency and the number of words for which a word is named as an associate as a predictor of the words that people produce in this task. We identify two simple process models that could support this apparent correspondence between human memory and internet search, and relate our results to previous rational models of memory.
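
A minimal Python sketch of PageRank by power iteration on a directed association graph (the three-word graph is a toy; the study's network was built from word association norms):

    import numpy as np

    def pagerank(A, d=0.85, tol=1e-10):
        """A[i, j] = 1 if word i produces word j as an associate."""
        n = A.shape[0]
        W = A / A.sum(axis=1, keepdims=True)      # row-normalize outgoing links
        r = np.full(n, 1.0 / n)
        while True:
            r_new = (1 - d) / n + d * (r @ W)
            if np.abs(r_new - r).sum() < tol:
                return r_new
            r = r_new

    A = np.array([[0, 1, 1],
                  [1, 0, 1],
                  [0, 1, 0]], float)              # toy association graph
    print(pagerank(A))  # higher rank ~ more 'fluent' words, on the account tested here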



Presenter:  Robin Thomas
Presentation type:  Poster
Presentation date/time:  7/26  17:30-18:30
 
Exploring different decision models of the uncertainty discrimination paradigm within the signal detection framework
 
Robin Thomas, Miami University
Lynn Olzak, Miami University
Jordan Wagge, Miami University
 
An uncertainty discrimination paradigm can be used to assess the independent or non-independent processing of two components in a compound stimulus. In certainty conditions, the observer knows in which component the cue to discrimination will appear. In the uncertainty condition, it can appear in either. Previous work purported to derive the performance bounds on d' for detection assuming optimality and independence. The present analysis clarifies the optimality predictions and presents predictions for discrimination performance made by alternative plausible decision models which can serve as a comparison to the predictions of the ideal observer.



Presenter:  Xing Tian
Presentation type:  Poster
Presentation date/time:  7/26  17:30-18:30
 
Geometric measures in electrophysiology: Spatial similarity and response magnitude
 
Xing Tian, University of Maryland
David Huber, University of California, San Diego
 
Sensor selection is typically used in electrophysiological studies, but this practice cannot differentiate between changes in the distribution of neural sources and changes in their magnitude. This problem is further complicated by subject averaging despite sizable individual differences in the distribution of the neural sources. Using data from all sensors, we present simple geometric techniques that 1) normalize against individual differences by comparison with a standard response for each individual; 2) compare the similarity of spatial patterns in different conditions (geometric angle) to ascertain whether the distribution of neural sources is different; and 3) compare the response magnitude between conditions that are sufficiently similar (geometric projection). Although precise cortical localization remains intractable, these techniques are easy to calculate, relatively assumption-free, and yield the important psychological measures of similarity and response magnitude.
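
Both measures reduce to elementary vector operations on sensor-space patterns, as in this Python sketch (stand-in data; the sensor count is hypothetical):

    import numpy as np

    rng = np.random.default_rng(6)
    standard = rng.normal(size=148)     # a subject's standard response, all sensors
    condition = 1.4 * standard + rng.normal(0, 0.3, 148)  # a comparison condition

    # Spatial similarity: angle between the two sensor-space patterns.
    cosine = condition @ standard / (np.linalg.norm(condition) * np.linalg.norm(standard))
    angle = np.degrees(np.arccos(cosine))

    # Response magnitude: projection of the condition onto the standard pattern.
    magnitude = condition @ standard / (standard @ standard)
    print(angle, magnitude)             # small angle; magnitude near 1.4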



Presenter:  Marieke Van Vugt
Presentation type:  Talk
Presentation date/time:  7/27  15:10-15:35
 
Distinct electrophysiological correlates of proactive and similarity-based interference in visual working memory
 
Marieke Van Vugt, University of Pennsylvania
Robert Sekuler, Brandeis University
Hugh Wilson, York University
Michael Kahana, University of Pennsylvania
 
We investigated the electrophysiological correlates of proactive interference and similarity-based interference in visual working memory for gratings and faces. Using a multivariate approach, we assessed the joint effects of reaction time and interference on oscillatory power. These analyses revealed significant and distinct electrophysiological correlates of proactive interference and similarity-based interference. For faces, trials with high proactive interference yielded increased occipital gamma oscillations. Gratings did not exhibit behavioral or electrophysiological proactive interference effects, but showed an electrophysiological correlate of lag instead. Trials with high similarity-based interference yielded decreased low-frequency (2-8 Hz) oscillations for both stimulus types. The marked differences in the electrophysiological correlates of the two types of interference, even when controlling for accuracy and RT differences, lend support to theories that posit separate informational dimensions for temporal and similarity-based information (e.g., Brown, Neath, & Chater, in preparation).



Presenter:  Trish Van Zandt
Presentation type:  Talk
Presentation date/time:  7/27  14:05-14:30
 
Temporal Contexts in Choice Response Time
 
Trish Van Zandt, Ohio State University
Mari Jones, Ohio State University
 
The theory that a simple choice among n alternatives occurs as a gradual accumulation of "evidence" over time is now widely accepted and has received support from neurophysiological studies. Mathematical models that represent this theory vary somewhat, but all assume that evidence can be represented as a stochastic process that terminates when the level of evidence exceeds a criterion. A signal-detection theory "front end," parameters of which are determined by stimulus factors, provides variability in the rates of evidence accumulation. We will show how temporal cues provided by the events in an experiment can affect the accumulation process. In particular, we will present data showing how subjects exploit task rhythm to improve their performance. We use a diffusion process as a model of the choice task, and Large and Jones' (1999) attentional entrainment model to modulate the parameters of the diffusion process. Together these two models can explain some of the effects we observe in our data, including the elimination of a speed-accuracy tradeoff.



Presenter:  Joachim Vandekerckhove
Presentation type:  Talk
Presentation date/time:  7/26  14:45-15:10
 
A Diffusion Model Account Of Practice In Lexical Decision
 
Joachim Vandekerckhove, University of Leuven
Gilles Dutilh, University of Amsterdam
Francis Tuerlinckx, University of Leuven
Eric-Jan Wagenmakers, University of Amsterdam
 
The Wiener diffusion process is a popular model of the joint distribution of reaction times and responses; in particular, the Ratcliff Diffusion Model (RDM) has recently garnered significant attention. However, in data sets with many conditions, the large number of parameters leads to estimation problems. With a fully worked example, we show how linear and nonlinear constraints on the RDM's parameters across conditions (or participants) not only reduce the number of parameters to be estimated, but can also be used to test substantive hypotheses when combined with model selection strategies. We use very common statistical modeling techniques to construct a "regression diffusion model". We use this model to investigate the effect of practice on reaction time and accuracy, using a large data set from a practice experiment with 25 sessions (10,000 trials total). Two participants were instructed to emphasize accuracy, the other two to emphasize speed. This instruction is expressed in the diffusion model parameters.
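
The core idea of cross-condition constraints can be sketched in a few lines of Python: condition-specific drift rates are replaced by a regression function of session number, so 25 conditions cost only two free parameters (the weights and link function here are hypothetical):

    import numpy as np

    sessions = np.arange(1, 26)              # 25 practice sessions
    X = np.column_stack([np.ones(25), np.log(sessions)])  # design matrix
    beta = np.array([0.15, 0.08])            # hypothetical regression weights

    drift = X @ beta   # 25 condition drift rates from only 2 free parameters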



Presenter:  Edward Vul
Presentation type:  Talk
Presentation date/time:  7/26  14:05-14:30
 
Temporal selection is continuous and deterministic; Responses are probabilistic.
 
Edward Vul, BCS, MIT
Nancy Kanwisher, BCS, McGovern, MIT
 
Is attentional selection continuous or discrete? Is it deterministic or variable between trials? Is the distribution of responses across trials representative of selection on a given trial? Most cognitive psychology experiments measuring selection average responses across trials to determine the distribution of selected items. From these data, authors infer the properties of selection on any one trial. However, the assumptions underlying this inference may not be justified. We devise a novel experiment and analysis technique: multiple probes are used on each trial, allowing us to assess the degree of between-trial variability and within-trial spread of selection that contribute to the final distribution of reports. Our analyses show little, if any, variability between trials. Further, we define a model of an observer that deterministically selects many items to varying degrees on every trial, and randomly samples from this selected subset to produce reports. This model provides an excellent fit with the data. We conclude that selection of items from an RSVP stream is continuous (in that many letters are selected at once to varying degrees), that selection is deterministic (in that there is little or no variability in what is selected across trials), and that subjects make responses by sampling from the deterministically selected distribution.
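
A Python sketch of the proposed observer: a fixed, graded selection profile applied deterministically on every trial, with reports produced by sampling from it (the Gaussian profile is hypothetical):

    import numpy as np

    rng = np.random.default_rng(7)
    positions = np.arange(-4, 5)                     # items around the cued position
    selection = np.exp(-0.5 * (positions / 1.5)**2)  # same graded selection every trial
    selection /= selection.sum()

    reports = rng.choice(positions, size=10000, p=selection)  # probabilistic responses
    # Across trials, reports reproduce the (deterministic) within-trial distribution.
    print(np.bincount(reports + 4) / 10000)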



Presenter:  Eric-Jan Wagenmakers
Presentation type:  Talk
Presentation date/time:  7/26  13:40-14:05
 
EZ does it?
 
Eric-Jan Wagenmakers, University of Amsterdam
Raoul Grasman, University of Amsterdam
Conor Dolan, University of Amsterdam
Han Van der Maas, University of Amsterdam
 
The Ratcliff diffusion model has many advantages over the traditional approach to response time analysis. Unfortunately, computational challenges have thus far prevented the model from gaining widespread use among experimental psychologists. In order to popularize the Ratcliff diffusion model, Wagenmakers, van der Maas, and Grasman (in press) developed a simplified "EZ" version. The EZ-diffusion model requires no fitting, but instead simply transforms RT mean, RT variance, and percentage correct into drift rate, boundary separation, and nondecision time. This presentation will discuss the current limitations of the EZ model and how some of these limitations can be successfully addressed.
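
The EZ transformation is closed-form; a Python transcription of the published equations (conventional scaling s = 0.1; assumes accuracy is not exactly 0, 0.5, or 1):

    import numpy as np

    def ez_diffusion(pc, vrt, mrt, s=0.1):
        """Map proportion correct, RT variance, and mean RT (correct trials, seconds)
        to drift rate v, boundary separation a, and nondecision time Ter."""
        L = np.log(pc / (1 - pc))                  # logit of accuracy
        x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
        v = np.sign(pc - 0.5) * s * x**0.25        # drift rate
        a = s**2 * L / v                           # boundary separation
        y = -v * a / s**2
        mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))  # mean decision time
        return v, a, mrt - mdt                     # Ter = MRT - MDT

    print(ez_diffusion(pc=0.8, vrt=0.112, mrt=0.723))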



Presenter:  Eric-Jan Wagenmakers
Presentation type:  New Investigator Speech
Presentation date/time:  7/27  16:30-17:30
 
Current Developments in the Modeling of Response Times and Accuracy Using the Ratcliff Diffusion Model
 
Eric-Jan Wagenmakers, University of Amsterdam
 
The Ratcliff diffusion model for simple two-choice decisions (e.g., Ratcliff, 1978; Ratcliff & Smith, 2004) has two outstanding advantages. First, the model generally provides an excellent fit to the observed data (i.e., response accuracy and the shape of RT distributions, both for correct and error responses). Second, the parameters of the model can be mapped to latent psychological processes such as the quality of information processing, cautiousness, and response bias. In recent years, these advantages of the Ratcliff diffusion model have become increasingly clear. Current advances in methodology allow all researchers to fit the diffusion model to data easily; new theoretical developments shed light on the optimality and possible neural underpinnings of the model; and recent applications to aging, IQ, and lexical decision highlight the added value of a diffusion model perspective on simple decision making.



Presenter:  Christoph Weidemann
Presentation type:  Talk
Presentation date/time:  7/27  13:40-14:05
 
Decisional noise as a source for violations of Signal Detection Theory
 
Christoph Weidemann, University of Pennsylvania
Shane Mueller, Indiana University & Klein Assoc. Div., ARA Inc.
 
Signal Detection Theory (SDT) assumes that responses are governed by perceptual noise and a flexible decision criterion, but recent criticisms suggest that these assumptions fundamentally misrepresent perceptual and decision processes (J. D. Balakrishnan, 1999). We hypothesize that these findings stem from decision noise: the inability to use deterministic response criteria. To test this hypothesis, we present a simple extension of SDT (the Decision Noise Model) and a new measure of perceptual noise that together help determine the locus of violations of SDT. Results from a new experiment provide unambiguous support for the decision noise hypothesis, and show that confidence ratings are especially unreliable and inconsistent. In addition, the Decision Noise Model successfully accounts for our own data as well as those from previously published studies. These findings suggest that decision noise may be important across a wide range of tasks and needs to be understood in order to accurately measure perceptual processes.
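
One generic way to formalize decision noise in Python: the criterion is redrawn from a Gaussian on every trial, which flattens the predicted ROC (a sketch in the spirit of, but not necessarily identical to, the authors' Decision Noise Model; values hypothetical):

    from math import sqrt
    from scipy.stats import norm

    d_prime, c_mean, c_sd = 1.5, 0.5, 0.6  # c_sd is the decision (criterion) noise

    # With trial-to-trial Gaussian criterion noise and unit perceptual noise,
    # hit and false-alarm rates become:
    hit = norm.cdf((d_prime - c_mean) / sqrt(1 + c_sd**2))
    fa = norm.cdf((0.0 - c_mean) / sqrt(1 + c_sd**2))
    print(hit, fa)  # criterion noise compresses performance relative to classical SDT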



Presenter:  Corey White
Presentation type:  Poster
Presentation date/time:  7/26  17:30-18:30
 
Emotional Processing and Dysphoria: A Diffusion Model Analysis
 
Corey White, Ohio State University
Roger Ratcliff, Ohio State University
Michael Vasey, Ohio State University
Gail McKoon, Ohio State University
 
Mood-congruent memory in depressive states has been considered a robust effect (Blaney, 1986), but findings have been somewhat equivocal. One possible reason is that commonly used choice reaction time tasks are especially sensitive to individual differences in response biases, which can add variability and obscure true differences. The diffusion model (Ratcliff, 1978) incorporates all aspects of the behavioral data from these tasks to separate the components of processing and control for response biases. We applied the diffusion model to data from lexical decision and recognition memory tasks and found strong support for mood-congruent memory: nondysphoric subjects showed a better memory match for positively than for negatively valenced words, whereas dysphoric subjects showed no difference. This pattern was not apparent in traditional analyses of reaction times or accuracy. Implications for using the diffusion model to better understand data from choice reaction time tasks are discussed.



Presenter:  Keith Worsley
Presentation type:  Symposium
Presentation date/time:  7/26  15:35-16:00
 
Detecting Sparse Connectivity: The 'Bubbles' Task in the fMRI Scanner
 
Keith Worsley, McGill University
Fraser Smith, University of Glasgow
Philippe Schyns, University of Glasgow
 
We are interested in the general problem of detecting sparse connectivity, or high correlation, between pairs of pixels or voxels in two sets of images. To do this, we set a threshold on the correlations that controls the false positive rate, which we approximate using new results in random field theory. We illustrate this using data from an fMRI experiment using the 'bubbles' task. In this experiment, the subject is asked to discriminate between images that are revealed only through a random set of small windows or 'bubbles'. We are interested in which parts of the image are used in successful discrimination, and which parts of the brain are involved in this task.



Presenter:  Johanna Xi
Presentation type:  Talk
Presentation date/time:  7/28  9:50-10:15
 
Zipf Distributions of Characters in Dreams
 
Johanna Xi, Purdue University
Richard Schweickert, Purdue University
 
In many social interactions, the frequencies with which people are involved follow a Zipf distribution, sometimes called a power law. For example, a person does not receive the same number of e-mail messages from everyone; the frequencies of e-mail messages tend to follow a Zipf distribution. We report evidence that the frequencies with which characters appear in dreams follow a Zipf distribution. Further, for one dreamer with a large enough number of dreams to work with, the conditional frequencies of characters appearing in a dream, given that a particular character appears in the dream, themselves follow a Zipf distribution.
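
The reported analysis pattern can be sketched in Python: rank characters by frequency and estimate the Zipf exponent as the negative slope in log-log coordinates (the counts are hypothetical):

    import numpy as np

    counts = np.array([120, 55, 38, 24, 18, 12, 9, 7, 5, 4])  # appearances per character
    rank = np.arange(1, counts.size + 1)

    # Zipf: frequency proportional to rank^(-alpha), a line in log-log coordinates.
    slope, intercept = np.polyfit(np.log(rank), np.log(counts), 1)
    print(-slope)   # estimate of the Zipf exponent alpha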



Presenter:  Jing Xu
Presentation type:  Talk
Presentation date/time:  7/26  15:10-15:35
 
A Bayesian Analysis of Serial Reproduction
 
Jing Xu, UC Berkeley
Thomas Griffiths, UC Berkeley
 
Bartlett (1932) explored the consequences of "serial reproduction" of information, in which one participant's reconstruction of a stimulus from memory becomes the stimulus seen by the next participant. These experiments were done using relatively uncontrolled stimuli such as pictures and stories, but suggested that serial reproduction could reveal the biases inherent in memory. We analyze serial reproduction for simple one-dimensional stimuli assumed to be drawn from a category. When people reconstruct these stimuli, they are influenced by the structure of the category. Huttenlocher, Hedges, and Vevea (2000) proposed that this effect can be modeled as a Bayesian inference, in which people combine inexact fine-grained stimulus information with category information to achieve higher accuracy. We show that if this is the case, serial reproduction can be modeled as an autoregressive time series with a predictable trajectory and stationary distribution. Within the same theoretical framework, we also formally analyze how the convergence rate and stationary distribution of this process are influenced by different category distributions, perceptual noise, and different types of response behavior. Our analyses provide a formal justification for the idea that serial reproduction reflects memory biases.
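
The reconstruction step and the resulting autoregressive chain, sketched in Python for a Gaussian category and Gaussian perceptual noise (parameter values hypothetical):

    import numpy as np

    rng = np.random.default_rng(8)
    mu, tau2 = 0.0, 1.0         # category distribution N(mu, tau2)
    sigma2 = 0.5                # perceptual noise variance
    w = tau2 / (tau2 + sigma2)  # posterior-mean weight on the noisy observation

    x = 4.0                     # initial stimulus, far from the category mean
    chain = [x]
    for _ in range(50):
        obs = x + rng.normal(0, np.sqrt(sigma2))  # noisy memory of the stimulus
        x = w * obs + (1 - w) * mu                # Bayesian reconstruction
        chain.append(x)         # an AR(1) process drifting toward the category mean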



Presenter:  Cheng-Ta Yang
Presentation type:  Talk
Presentation date/time:  7/26  10:55-11:20
 
The decision process in change detection
 
Cheng-Ta Yang, Department of Psychology, National Taiwan University, Taiwan
Yung-Fong Hsu, Department of Psychology, National Taiwan University, Taiwan
Yei-Yu Yeh, Department of Psychology, National Taiwan University, Taiwan
 
Change detection is a fundamental process in human visual perception. Detecting an object change involves a feature-by-feature comparison, as each object consists of multiple features. We used the double factorial design developed by Townsend and Nozawa (1995) to investigate the process architecture, stopping rule, and capacity limitation in detecting changes of different features in an object. Observers were required to perform a change detection task in which changes in luminance and orientation were independently manipulated with two levels of ambiguity (ambiguous/unambiguous). Reaction times in detecting changes in the redundant-target condition were faster than those in the single-target condition. Moreover, we computed the mean interaction contrast of the reaction times and the interaction contrast of the survivor functions of the reaction time distributions in the redundant-target condition. The results supported either a parallel self-terminating or a coactive process in change detection, operating with either unlimited capacity or supercapacity. Adding a change signal in the other attribute can improve detection performance. This study provides a direct test of how different features are processed to contribute to the decision process in change detection. Keywords: change detection, double factorial design
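
The paradigm's two diagnostic statistics, sketched in Python for simulated reaction times (stand-in data; index 1 marks the slower, more ambiguous level of each feature):

    import numpy as np

    rng = np.random.default_rng(9)
    rt = {(i, j): 0.4 + 0.1 * i + 0.1 * j + rng.exponential(0.1, 2000)
          for i in (0, 1) for j in (0, 1)}       # simulated double-factorial cells

    # Mean interaction contrast: MIC = LL - LH - HL + HH (on mean RTs).
    mic = (rt[1, 1].mean() - rt[1, 0].mean() - rt[0, 1].mean() + rt[0, 0].mean())

    # Survivor interaction contrast: SIC(t) on survivor functions S(t) = P(RT > t).
    t = np.linspace(0.3, 2.0, 200)
    S = {k: (v[:, None] > t).mean(axis=0) for k, v in rt.items()}
    sic = S[1, 1] - S[1, 0] - S[0, 1] + S[0, 0]
    print(mic, sic.max(), sic.min())  # signatures of architecture and stopping rule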



Presenter:  Jack Yellott
Presentation type:  Talk
Presentation date/time:  7/27  13:40-14:05
 
Precorrecting spatial phase reversals in visual stimuli destined for defocus
 
Jack Yellott, University of California, Irvine
 
The main effect of optical defocus in the eye is always a decrease in retinal image contrast, but when the pointspread function has sharp edges (as it does for defocus levels above 1 diopter), defocus also produces frequency-specific spatial contrast sign-reversals (e.g., 1+cos(fx) becomes 1-cos(fx)). Such "spurious resolution" distortions occur whenever the optical transfer function imposed by defocus creates a half-cycle phase shift (e.g., 1+cos(fx) becomes 1+cos(fx+π) = 1-cos(fx)). Correcting such phase reversal errors computationally in simulated out-of-focus retinal images (e.g., of printed text) can produce decisive improvements in stimulus recognizability. Mathematically, the same correction can be performed in advance (e.g., in a video display) by altering the phase spectra of visual stimuli to anticipate phase reversals that will be produced by subsequent defocus (e.g., to precorrect phase reversal at frequency f, replace stimulus 1+cos(fx) with 1+cos(fx-π)). In practice this works -- e.g., a phase-precorrected point stimulus viewed out-of-focus does appear as a small bright spot on a uniform background. But in comparison to the dramatic improvement produced by phase-correcting images after defocus, pre-defocus phase correction generally produces a post-defocus retinal image with disappointingly low contrast. Analysis to be reported here shows why this is so, and allows one to estimate that peak contrast in the defocused image of a phase-precorrected stimulus can never be greater than around 25%.
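
A one-dimensional numerical illustration in Python, using a box-shaped pointspread as a stand-in for defocus (any PSF whose OTF goes negative at the stimulus frequency shows the same reversal):

    import numpy as np

    x = np.linspace(0, 1, 2000, endpoint=False)
    f = 6.0                      # grating frequency (cycles per unit)
    w = 0.25                     # width of the box pointspread function
    otf = np.sinc(f * w)         # box-blur OTF at f: sin(pi f w) / (pi f w)

    stimulus = 1 + np.cos(2 * np.pi * f * x)         # ordinary grating
    precorr = 1 + np.cos(2 * np.pi * f * x - np.pi)  # phase-precorrected grating

    blurred = 1 + otf * np.cos(2 * np.pi * f * x)            # phase reverses (otf < 0)
    blurred_pre = 1 + otf * np.cos(2 * np.pi * f * x - np.pi)  # correct phase, low contrast
    print(otf)  # negative here, so only the precorrected version keeps the right phase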



Presenter:  Michiko Yoshida
Presentation type:  Poster
Presentation date/time:  7/26  17:30-18:30
 
Modeling children's and adults' prosodic-cue perception for compound word ambiguity resolution
 
Michiko Yoshida, University of Texas at Dallas
Willam F. Katz, University of Texas at Dallas
Steven Henley, Martingale Research Corporation
Richard Golden, University of Texas at Dallas
 
Melodic and rhythmic properties of speech correlate with grammatical structures such as phrase and word boundaries, and listeners use this prosodic information to identify syntactic units during speech perception. This processing can be characterized by a Fuzzy Logic Model of Perception (FLMP) (Massaro, 1987). Since FLMP models may be viewed within a logistic regression modeling framework (e.g., Crowther, Batchelder, & Hu, 1995), the current study used logistic regression models to investigate how the trading relationship between pitch and durational cues develops between children (ages 5 and 7) and adults. A listening experiment was conducted in which subjects interpreted word strings as "sun, flowerpot" or "sunflower, pot", depending on prosody. The audio stimuli for this two-alternative forced-choice task were created using re-synthesized speech having three levels of pitch and five levels of durational patterns. The data are being modeled as follows: each level of pitch and duration will be treated as an independent binary variable, and Pitch x Duration interactions and a constant will be assumed. Each possible combination of these terms will be tested against the obtained data. The best models for each age group will be selected based on the Generalized Akaike Information Criterion (GAIC; a generalization of AIC robust to model misspecification), as well as constraints concerning semantic interpretability and multicollinearity reduction. Age differences are predicted for the model structures and estimated coefficients. The results will be interpreted with respect to theories of cue-trading and a developmental cue-weighting shift in the use of prosodic cues for disambiguation.



Presenter:  Chen Yu
Presentation type:  Symposium
Presentation date/time:  7/28  9:50-10:15
 
Hypothesis Testing and Associative Learning in Cross-Situational Word Learning: Are They One and the Same?
 
Chen Yu, Indiana University
Linda Smith, Indiana University
Krystal Klein, Indiana University
Richard Shiffrin, Indiana University
 
Recent studies (e.g., Yu & Smith, in press; Smith & Yu, submitted) show that both adults and young children possess powerful statistical computation capabilities -- they can infer the referent of a word from highly ambiguous contexts involving many words and many referents. This paper goes beyond demonstrating empirical behavioral evidence -- we seek to systematically investigate the nature of the underlying learning mechanisms. Toward this goal, we propose and implement a set of computational models based on three mechanisms: (1) hypothesis testing; (2) dumb associative learning; and (3) advanced associative learning. By applying these models to the same materials used in learning studies with adults and children, we first conclude that all the models can fit the behavioral data reasonably well. The implication is that these mechanisms -- despite their seeming differences -- may be fundamentally (or formally) the same. In light of this, we design and conduct a series of simulation studies in which we systematically manipulate, across the three models, the learning input, learning parameters, and decision-making procedures at test. Our simulation results suggest a formally unified view of learning principles based on the shared ground between the three mechanisms. We argue that the traditional controversy between hypothesis testing and associative learning as two distinct learning machineries may not exist, and that exploring learning mechanisms within such a unified view may reveal new insights about learning processes that fall between these two classic extremes.
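
A Python sketch of the simplest of the three mechanisms, dumb associative learning: co-occurrence counts accumulated across individually ambiguous trials suffice to recover word-referent mappings (toy input):

    import numpy as np

    n_words = n_refs = 6
    count = np.zeros((n_words, n_refs))

    # Each learning trial presents several words with their (unlabeled) referents.
    trials = [([0, 1], [0, 1]), ([0, 2], [0, 2]), ([1, 3], [1, 3]), ([2, 3], [2, 3])]
    for words, refs in trials:
        for w in words:
            for r in refs:
                count[w, r] += 1   # associate every word with every referent present

    print(count.argmax(axis=1)[:4])  # words 0..3 map to referents 0..3, despite ambiguity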



Presenter:  Matthew Zeigenfuse
Presentation type:  Poster
Presentation date/time:  7/26  17:30-18:30
 
A Bayesian graphical modeling approach to additive clustering
 
Matthew Zeigenfuse, University of California, Irvine
Michael Lee, University of California, Irvine
 
A new algorithm for learning featural representations from similarity data is proposed. The algorithm infers models for a given number of features by numerically sampling the posterior distribution of a Bayesian model of similarity data, and applies a Bayesian model selection approach to choose the appropriate number of features. The approach is demonstrated on an experiment involving similarity judgments of circles, squares, and triangles colored red, green, and blue.
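
The featural (additive clustering) representation being inferred has a simple forward model, sketched here in Python (feature assignments and weights hypothetical):

    import numpy as np

    # Additive clustering: s_ij ~ sum_k w_k * f_ik * f_jk (+ additive constant c).
    F = np.array([[1, 0],      # e.g., object 1 has feature 1 only
                  [1, 1],
                  [0, 1]])     # feature memberships f_ik (inferred from data)
    w = np.array([0.6, 0.3])   # nonnegative feature weights
    c = 0.1

    S_model = F @ np.diag(w) @ F.T + c   # modeled similarity matrix
    print(S_model)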



Presenter:  Jun Zhang
Presentation type:  Talk
Presentation date/time:  7/26  9:50-10:15
 
Causal Power: A New Look from Signal-Detection Framework
 
Jun Zhang, University of Michigan
 
P. Cheng (1997) put forward an analysis of the power of a cause (an event) either to generate an effect (another event) or to prevent it; these are known as "generative causes" and "preventive causes", respectively. Here we invoke the signal-detection framework (specifically, the low-threshold and high-threshold versions of SDT) and derive the same formulae given by Cheng (1997) for the generative and preventive power of a cause. This viewpoint allows a refinement of the notion of causal power: the power of a generative cause c of an effect e is interpreted as the power of c to cause the occurrence of a sufficient condition for e, and the power of a preventive cause c of e as the power of c to prevent the occurrence of a necessary condition for e. It also follows that generative power and preventive power are two fundamentally independent attributes, quantifying sufficiency and necessity, of an event as a cause of another event in the causal chain.
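
Cheng's two power formulae, which the signal-detection derivation recovers, in Python (contingency values hypothetical):

    def generative_power(p_e_c, p_e_notc):
        # power of c to produce e: (P(e|c) - P(e|~c)) / (1 - P(e|~c))
        return (p_e_c - p_e_notc) / (1 - p_e_notc)

    def preventive_power(p_e_c, p_e_notc):
        # power of c to prevent e: (P(e|~c) - P(e|c)) / P(e|~c)
        return (p_e_notc - p_e_c) / p_e_notc

    print(generative_power(0.8, 0.4))  # 0.667: c suffices for e two-thirds of the time
    print(preventive_power(0.2, 0.6))  # 0.667: c blocks e two-thirds of the time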