Annual Interdisciplinary Conference, Jackson, WY

                           February 2-7, 2003

                                ABSTRACTS

================================================================================

John Antrobus
CUNY

Word Recognition and Learning: Representation and Process, in Repetition Priming
Authors: John S. Antrobus, Martin Duff, Yusuke Shono, Bala Sundaram, Reza
  Farahani, and Sevda Numanbayraktaroglu

  Recognizing a word (or object) is accomplished by a network of multifunctional
brain structures.  Not only do these structures recognize, but they can also
learn, and they bias their functional structure to maximize the efficiency of
subsequent recognition processes.  In the latter case, they make use of
context-based, statistical dependencies among object features to reduce
processing time, without sacrificing accuracy.  Simply put, given the sensory
input, and the context in which the word is presented, the recognition system
estimates the likelihood that the target is a particular word, and whether the
word was recently recognized.  These processes are essential to efficient
information processing, and constitute the basis for working memory. 
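  The estimation step can be sketched as a simple Bayesian combination (a
minimal illustration of the idea, not the RICAL model itself; the words and
probabilities are invented for the example):

    def recognize(feature_like, context_prior):
        """Posterior over candidate words given sensory evidence and context:
        P(word | features, context) is proportional to
        P(features | word) * P(word | context)."""
        posterior = {w: feature_like[w] * context_prior[w] for w in feature_like}
        z = sum(posterior.values())
        return {w: p / z for w, p in posterior.items()}

    # Ambiguous input between two cohort members; context favors "cat":
    print(recognize({"cat": 0.5, "cot": 0.5}, {"cat": 0.8, "cot": 0.2}))
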
  How words and objects are represented, how those representations in turn
recognize words and objects, and how they are modified by those recognition
processes are studied extensively with a set of experimental procedures called repetition
priming.  The sensitivity of the procedures to small effects of a single reading
of a word has made them useful for addressing questions in several theoretical
domains -- perception, learning, memory, and neurocognition.  But these
different efforts have not led to a unified model of these different processes.
This paper shows how the knowledge of these different domains, as well as
cognitive and computational neuroscience, and reading, can productively inform
one another. 
  A Recognition in Context and Learning model (RICAL) represents the structures
and processes that simultaneously enhance learning-based sensitivity and
short-term, context-based accuracy.  RICAL assumes that the process of reading
an unfamiliar word modifies the association-cortex representation of that word,
and that this modification biases word representations in the word's cohort of
similar words in favor of that word, while simultaneously correcting for the
small contrary biases incurred by prior readings of other similar words.  The
long-term consequence of these successive biases is enhanced accuracy of the
word's representation, i.e., long-term learning.  Absent these biases,
learning cannot occur.  The basis for the largest repetition priming recognition
bias component, however, is the reversible flow of information through the
cortical-hippocampal (entorhinal cortex and CA3) conjunctions formed during
prior reading of the prime and target in a particular context.  Reading the
prime word, familiar or unfamiliar, links its representation to the prime, i.e.,
prior, reading context via the hippocampal circuit.  At test, this same context
information, via the same, but reversible circuit, sensitizes the cortical
representation of the primed word, so that ambiguous target features are
"interpreted" in favor of that word, producing a biased, but generally accurate,
recognition of the target word.

================================================================================

Bettina L. Beard
NASA Ames Research Center

A Methodology for Defining Occupational Vision Standards

  The majority of occupational vision standards are not empirically
substantiated, and appear to have been chosen arbitrarily.  We have developed a
methodology designed to specify vision needs in relation to specific
occupational tasks.  In collaboration with the FAA we are applying this
methodology to define vision qualifications for aviation maintenance inspectors
where no standards currently exist.  To apply the methodology to aircraft
inspection, we simulate visual deficits such as color weakness, mid-spatial
frequency contrast sensitivity loss, cataract, acuity declines, etc. through
manipulation of images of aircraft defects.
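  As an illustration, deficits of this kind can be simulated with linear
filtering of the defect images (a minimal sketch assuming grayscale images;
the cutoff and attenuation values are arbitrary placeholders, not the values
used in the project):

    import numpy as np

    def radial_frequency(shape):
        """Radial spatial frequency (cycles/image) of each FFT coefficient."""
        fy = np.fft.fftfreq(shape[0])[:, None] * shape[0]
        fx = np.fft.fftfreq(shape[1])[None, :] * shape[1]
        return np.hypot(fy, fx)

    def lowpass(img, cutoff):
        """Remove high spatial frequencies to mimic an acuity decline."""
        F = np.fft.fft2(img)
        return np.real(np.fft.ifft2(F * (radial_frequency(img.shape) <= cutoff)))

    def bandstop(img, lo, hi, attenuation=0.2):
        """Attenuate a mid-frequency band to mimic a contrast-sensitivity loss."""
        F = np.fft.fft2(img)
        r = radial_frequency(img.shape)
        gain = np.where((r >= lo) & (r <= hi), attenuation, 1.0)
        return np.real(np.fft.ifft2(F * gain))

    rng = np.random.default_rng(0)
    defect_image = rng.random((256, 256))     # stand-in for a defect photograph
    acuity_loss = lowpass(defect_image, cutoff=30)
    midband_loss = bandstop(defect_image, lo=8, hi=32)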

================================================================================

Randolph Blake
Vanderbilt University

Brain Areas Responsive to Biological Motion 

================================================================================

Geoffrey Boynton
The Salk Institute

Cortical Magnification in V1 Predicts Visual Acuity
Authors:  Geoffrey M. Boynton and Robert O. Duncan

  We compared visual acuity thresholds to areal cortical magnification factors
(ACMF) in primary visual cortex in 10 human observers.  Two acuity measurements
were acquired: (1) Vernier acuity measurements were made using standard
psychophysical techniques on a CRT, and (2) grating acuity thresholds were made
using a laser interferometer to bypass the optics of the eye.  The ACMF for V1
in both hemispheres was derived for each observer by fitting complex logarithmic
transformations to flattened representations of fMRI activity maps.  Vernier and
grating acuity thresholds relate to ACMF by a power function with exponent -1/2, which
means a fixed distance in V1 (~ 0.12 mm for Vernier acuity and ~0.18 mm for
grating resolution) represents the spatial offset of the Vernier stimulus at
threshold, regardless of the eccentricity of the stimulus.  Also, we found that
across subjects, low acuity thresholds are associated with larger amounts of
cortex in V1 representing the stimulus.
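  The exponent of -1/2 implies that threshold times the linear magnification
(the square root of ACMF) is a constant cortical distance.  A short numerical
check (the ACMF values below are illustrative, not measured):

    import numpy as np

    def cortical_distance_mm(threshold_deg, acmf_mm2_per_deg2):
        """Cortical distance spanned by a threshold offset: the linear
        magnification is sqrt(ACMF), in mm/deg."""
        return threshold_deg * np.sqrt(acmf_mm2_per_deg2)

    acmf = np.array([20.0, 5.0, 1.0])       # mm^2/deg^2, foveal to peripheral
    thresholds = 0.12 * acmf ** -0.5        # deg, using the Vernier constant
    print(cortical_distance_mm(thresholds, acmf))   # 0.12 mm at every eccentricity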

================================================================================

Ken Britten
UC Davis

MST and the Perception of Optic Flow

  Extrastriate area MST has been studied extensively, and many forms of evidence
suggest it plays a role in the perception of a complex, space-varying pattern of
motion termed "optic flow."  MST contains explicit signals representing such
motion patterns, mixed in with more conventional linear direction selectivity.
Little is known about the mechanisms that build this selectivity; it could in 
principle be done in a variety of ways.  Furthermore, little is known about the
manner in which these signals in MST are used to support perception of such
complex motion patterns.  We have been studying both of these questions, and the
results of two experiments will be presented and compared.  The first is a
physiological experiment measuring the summation of local vectors within MST
receptive fields, and the second is a psychophysical experiment investigating 
how human subjects' perceptual performance is affected by summation of motion
across space.  In both the physiology and perceptual experiments, there were
quantitative differences in the manner in which summation improved responses,
depending on whether the motion was uniform or space-varying.  This suggests
both a summation nonlinearity in the receptive fields of MST neurons and that
signals such as these limit the perception of space-varying optic flow.

================================================================================

Scott Brown
UC Irvine

Sequential Sampling in Perceptual Choice Models: Is It Necessary?
Authors:  Scott Brown and Andrew Heathcote

  Currently, the most successful models of perceptual choice can be classed as
"sequential sampling" models.  These assume that there is some intrinsic noise
in the internal representation of stimuli, and that choices involving such
stimuli take time because this noise must be offset by the integration of many
independent samples.  The diffusion model, leaky accumulator model, random walk
models and Poisson counter models all fall into this class.  Successive sampling
of noisy representations allows these models to match empirical regularities
involving speed-accuracy trade-offs and various relationships between latency
distributions for correct and error responses.  We present evidence from
mathematical analysis and computer simulation that the fundamental assumption of
variability in the stimulus representation within each decision process may not
be required in the leaky accumulator model.  Instead, other sources of
(between-trial) variability that are commonly assumed in such models can provide
many of the requisite effects.
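  The contrast at issue can be made concrete in simulation (a minimal sketch
of a two-unit leaky accumulator, not the authors' implementation; all
parameter values are assumptions): with within_noise set to zero, the only
stochasticity left is the trial-to-trial variation of the drift input.

    import numpy as np

    rng = np.random.default_rng(0)

    def leaky_accumulator_trial(drift, leak=0.1, threshold=1.0, dt=0.01,
                                within_noise=0.0, max_t=5.0):
        """Race between two leaky accumulators; returns (choice, RT in s).
        within_noise=0 removes moment-to-moment sampling noise entirely."""
        x = np.zeros(2)
        inputs = np.array([drift, 1.0 - drift])    # two response alternatives
        t = 0.0
        while t < max_t:
            noise = within_noise * np.sqrt(dt) * rng.standard_normal(2)
            x += dt * (inputs - leak * x) + noise
            t += dt
            if x.max() >= threshold:
                break
        return int(np.argmax(x)), t

    # Between-trial variability only: drift varies across trials, not within.
    trials = [leaky_accumulator_trial(rng.normal(0.6, 0.1), within_noise=0.0)
              for _ in range(1000)]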

================================================================================

Tom Busey
Indiana University

Stochastic Neural Activity in Face Processing Regions is Related to the
  Response to an Ambiguous Stimulus
Authors:  Heather Wild and Tom Busey

  Previous research on binocular rivalry and motion stimuli suggests that
stochastic activity early in the visual processing stream can influence the
perception of an ambiguous stimulus.  In the present work we extend this to
higher-level tasks of word and face processing.  Using an added-noise procedure
with frozen noise, we separated responses to noise-alone trials based on the
observer's response (face or word).  We found a larger response in a component
previously associated with faces when the observer reported seeing a face in the
noise-alone stimulus.  The results suggest that stochastic activity in these
later perceptual regions can influence the behavioral response to an ambiguous
stimulus.  That is, when you think you see a face, it may be because of greater
activity in the face processing regions on that particular trial.

================================================================================

Gemma Calvert
University of Oxford

Multisensory Integration in the Human Brain

  Humans are equipped with multiple sensory channels through which to experience
the environment.  Each sense provides qualitatively distinct subjective
impressions of the world.  Despite the remarkable disparity of these sensations,
we are nevertheless able to maintain a coherent and unified perception of our
surroundings.  These crossmodal capabilities confer considerable behavioural
advantages.  As well as having the capacity to use this sensory information
interchangeably, integration of multiple sensory inputs can dramatically enhance
the detection and discrimination of external stimuli and speed responsiveness
(see Stein & Meredith, 1993).  Given the ubiquitous nature of crossmodal
processing for human experience, knowledge of the underlying neurophysiology
seems vital for a complete understanding of human sensory perception.
  Modern neuroimaging techniques now offer a method of studying these crossmodal
interactions in the intact human brain (Calvert, 2001).  The current challenge
is to identify a valid experimental framework for studying these phenomena.  To
date, there has been little consistency in terms of experimental design or
analytic strategy across different multisensory imaging studies.  Efforts to
identify brain areas involved in synthesizing crossmodal inputs, or regions
whose activity is enhanced or suppressed by intersensory influences have
included (i) the superimposition of two unimodal brain activation maps to
identify co-responsive sites, (ii) the use of conjunction analysis to extract
multisensory-specific activation areas, and (iii) the identification of
crossmodal interaction effects which resemble the electrophysiological indices
of multisensory integration obtained in other species.  The consequences of
using one or other methodology to identify putative sites of multisensory
integration in humans will be illustrated using fMRI data from audio-visual
paradigms acquired in our own laboratory.
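  Criterion (iii), for example, is often operationalized as superadditivity,
the index used in single-unit work: the response to the audiovisual compound
must exceed the sum of the unimodal responses.  A minimal sketch (assuming
per-voxel condition parameter estimates from a standard GLM; the numbers are
invented):

    import numpy as np

    def superadditive_voxels(beta_a, beta_v, beta_av, criterion=0.0):
        """Flag voxels where AV > A + V, one crossmodal-interaction index."""
        return (beta_av - (beta_a + beta_v)) > criterion

    a  = np.array([1.0, 0.5])
    v  = np.array([0.8, 0.6])
    av = np.array([2.5, 1.0])
    print(superadditive_voxels(a, v, av))    # -> [ True False]
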
  Studies of visuo-tactile integration from our own laboratory are beginning to
suggest that certain principles of multisensory synthesis are modified depending
on whether integration benefits task performance and on the modality that the
subject is instructed to attend to.  More recently, we have also begun to
investigate whether similar principles of multisensory facilitation or
suppression also apply in the case of the chemosenses, and whether a more
complete understanding of crossmodal mechanisms will require the synthesis of
different imaging techniques (EEG/MEG & FMRI).  Initial indications from such
multimodal approaches suggest that this may well be the best route forward to
identify not only sites of integration but also the nature of the processing
being carried out in different heteromodal and sensory-specific sites and their
time course.

================================================================================

Edgar A. DeYoe
Medical College of Wisconsin

Some Insights into Functional Similarities of Vision and Hearing

  In vision, directed spatial attention can lead to a perception of coherent
motion in an otherwise ambiguous display.  We sought to test for an auditory
analog.  Naïve subjects sat within a ring of eight speakers producing
independent 1 Hz sinusoidally amplitude-modulated white noise with adjacent
speakers 180 degrees out of phase.  All ten subjects perceived apparent sound
source rotation and 9/10 could voluntarily switch the apparent direction of
rotation by attending to different cues in the display, suggesting that an
attention-associated motion mechanism exists in audition.  This was confirmed
in experiment 2, in which subjects listened to ambiguous sound motion in a ring
of four speakers.  Attention directed to one of two unique marker sounds caused
the stimulus to disambiguate and rotate in a direction determined by the
attended marker.  As in vision, attention-related auditory motion appears to be
more sluggish than purely stimulus-driven motion, as evidenced by degraded
performance at 1-2 revolutions/s (rps), but not at 0.5 rps.  In experiment 3 we
tested cross-modality (audiovisual) motion perception.  Individual stimuli
occurred at successive locations around vertices of a diamond-shaped array in
front of the subjects, alternating between lights and sounds.  Subjects were
required to integrate a motion path across the alternating cue to perceive
rotational motion.  However, no subject spontaneously perceived rotational
motion, only the independent alternation of lights and sounds.  Since a percept
of motion does not occur even in a non-ambiguous stimulus, it follows that
attention-based motion is unlikely to exist in a polymodal context even though
it occurs in both vision and audition independently.
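  The ambiguous stimulus of the first experiment can be sketched directly (a
minimal illustration of the stimulus logic, not the laboratory code; the
sampling rate and duration are arbitrary):

    import numpy as np

    def ambiguous_ring(n_speakers=8, am_hz=1.0, fs=44100, dur_s=10.0, seed=0):
        """Per-speaker waveforms: independent white noise, sinusoidally
        amplitude-modulated at am_hz, with adjacent speakers 180 degrees out
        of phase, so the global direction of rotation is ambiguous."""
        rng = np.random.default_rng(seed)
        t = np.arange(int(fs * dur_s)) / fs
        channels = []
        for k in range(n_speakers):
            noise = rng.standard_normal(t.size)
            envelope = 0.5 * (1.0 + np.sin(2 * np.pi * am_hz * t + k * np.pi))
            channels.append(envelope * noise)
        return np.stack(channels)        # shape (n_speakers, n_samples)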

================================================================================

Barbara Dosher
UC Irvine

Object Attention

================================================================================

Ione Fine
UC San Diego

Vision in the Blind
Authors: I. Fine, A. R. Wade, A. A. Brewer, M. G. May, G. M. Boynton, B. A.
   Wandell, and D. I. A. MacLeod

  The effects of visual deprivation go beyond simple acuity losses.  Deprivation
can also cause impairments in global form processing, 3D shape perception, and
object and face recognition.  In contrast, performance on color and motion tasks
seems to be relatively robust to deprivation.  Consistent with these behavioral
data, fMRI activity in one observer's MT complex was as great, and covered as
large an area, as in control observers, while responses to retinotopic stimuli in
V1/2 were weak and these areas appeared to be smaller than normal.  Face and
object stimuli did not produce activity in areas near fusiform and lingual gyri
associated with face and object processing.  Long-term interruptions in visual
experience, even beyond the traditional critical period, have significant
effects on visual processing, with form processing being particularly
susceptible to interruptions in visual input.

================================================================================

Wilson S. Geisler
University of Texas at Austin

Multiple-fixation Visual Search: Gaze-contingent Displays and the Ideal Searcher
Authors:  W. S. Geisler, J. Najemnik, & J. S. Perry

  Visual search in the real world typically involves integrating information
over multiple fixations of the eye.  Nonetheless, most research has focused on
single-fixation search tasks where stimuli are presented briefly.  At least two
factors have held back progress in understanding more natural search:  the
difficulty of precisely controlling and manipulating the stimulus on the retina
and the lack of an ideal observer theory for multiple-fixation visual search. 
To allow stimulus control in extended visual search tasks, we have developed
"gaze-contingent" software that allows precise real-time control of the content 
of a visual display relative to the observer's current gaze direction (measured
with an eye tracker).  Using this software, we measured search time and eye
movements while subjects searched for Gabor targets in 1/f noise.  We varied 
parametrically the target spatial frequency, the contrast of the noise, and the
rate of fall off in display resolution from the point of fixation.  This
experiment provides data on how much information can be removed from the
periphery (how much foveation can be tolerated) without affecting search time or
the pattern of eye movements.  We find that the shape of the function describing
search time as a function of the degree of foveation is dependent upon the
spatial frequency of the target, but is (interestingly) independent of the
contrast of the noise.  To provide the appropriate benchmark against which to
evaluate actual search performance and provide a starting point for developing
models for real performance, we have derived the ideal observer for visual
search in broadband background noise, where the ideal observer is constrained by
an arbitrary function describing sensitivity across the retina, and by some
level of internal noise.  We will describe the ideal observer and some of its
properties.
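  The heart of such an ideal searcher is a Bayesian update of the posterior
over target locations, with the reliability of each observation set by an
eccentricity-dependent detectability d'.  A minimal sketch (the fall-off
function, grid, and parameters are placeholders, not the derived model):

    import numpy as np

    rng = np.random.default_rng(1)

    def dprime(ecc_deg, d0=3.0, e_half=2.0):
        """Assumed fall-off of detectability with retinal eccentricity."""
        return d0 / (1.0 + ecc_deg / e_half)

    def fixation_update(log_post, locs, fix, target):
        """One fixation: a noisy observation at every location (mean d' at
        the target, 0 elsewhere) adds its Gaussian log-likelihood ratio."""
        d = dprime(np.linalg.norm(locs - fix, axis=1))
        obs = rng.standard_normal(len(locs)) + d * (np.arange(len(locs)) == target)
        return log_post + d * obs - 0.5 * d ** 2

    locs = np.array([(x, y) for x in range(-6, 7, 3)
                            for y in range(-6, 7, 3)], dtype=float)
    log_post, target = np.zeros(len(locs)), 7
    for _ in range(5):                   # fixate the current posterior mode
        fix = locs[np.argmax(log_post)]
        log_post = fixation_update(log_post, locs, fix, target)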

================================================================================

Jacqueline Gottlieb
Columbia University

Neurophysiological Mechanisms of Visual Attention in Monkey Posterior Parietal
  Cortex

  Abundant evidence, originating primarily in the study of the neurological
syndrome of neglect, has implicated the human posterior parietal cortex in the
control of attention. In contrast, relatively little is known about the
neurophysiological mechanisms of attention in the parietal cortex of the monkey.
The experiments I describe begin to elucidate the attentional functions of a
portion of the monkey's posterior parietal cortex, the lateral intraparietal
area (LIP). The vast majority of LIP neurons have visually-evoked responses with
circumscribed spatial receptive fields. These neurons provide a very selective
salience representation of the visual world, in which only objects that are 
likely to attract attention - either by virtue of their physical salience or of
their task-relevance - are strongly represented.  Although the selective
visually-evoked responses in LIP can contribute to the specification of putative
targets for saccades, these neurons are not dedicated to oculomotor control.
Instead, by virtue of its anatomical connections with both the visual and the 
saccadic systems, area LIP can concomitantly signal selection for saccades and
selection-for-perception (attentional selection).  This may account, at least in
part, for the close association between saccades and attention in natural
behavior.  In ongoing experiments using both single-unit recording and transient
pharmacological inactivation we are investigating specific links between LIP
activity and attentional orienting in several visual tasks.

================================================================================

Charles M. Gray
Montana State University

Adaptive Coincidence Detection and Dynamic Gain Control in Visual Cortical
  Neurons In Vivo

  Several theories have proposed a functional role for response synchronization
in sensory perception.  Critics of these theories have argued that selective
synchronization is physiologically implausible when cortical networks operate at
high levels of activity.  Using intracellular recordings from visual cortex in
vivo, in combination with numerical simulations, we find dynamic changes in
spike threshold that reduce cellular sensitivity to slow depolarizations and
concurrently increase the relative sensitivity to rapid depolarizations. 
Consistent with this, we find that spike activity and high frequency
fluctuations in membrane potential are closely correlated and that both are more
tightly tuned for stimulus orientation than the mean membrane potential.  These
findings suggest that under high input conditions the spike generating mechanism
adaptively enhances the sensitivity to synchronous inputs while simultaneously
decreasing the sensitivity to temporally uncorrelated inputs.
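  The proposed mechanism can be caricatured in a few lines (a toy model, not
the intracellular data or simulations; all constants are placeholders): let
the threshold ride on a low-passed copy of the membrane potential, so that
slow depolarizations carry the threshold up with them while fast, synchronous
depolarizations outrun it.

    def adaptive_threshold_spikes(vm, dt_ms=1.0, v_rest=-70.0, theta0=-55.0,
                                  gain=0.6, tau_ms=20.0, refrac_ms=2.0):
        """Spike times from a membrane-potential trace (mV, one sample per
        dt_ms) when the threshold adapts to a low-passed copy of Vm."""
        v_slow, spikes, last = v_rest, [], -1e9
        for i, v in enumerate(vm):
            v_slow += (dt_ms / tau_ms) * (v - v_slow)      # slow copy of Vm
            theta = theta0 + gain * max(v_slow - v_rest, 0.0)
            t = i * dt_ms
            if v >= theta and t - last >= refrac_ms:
                spikes.append(t)
                last = t
        return spikes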

================================================================================

Kalanit Grill-Spector
Stanford University

The Neural Basis of Visual Object Recognition

  Humans recognize objects at an astonishing speed and with remarkable ease.
Multiple regions in the human ventral stream respond preferentially to objects.
Some regions display preference for specific categories such as faces or places.
How is the functional organization of these areas related to our ability to
recognize objects?  Here we tested whether different areas in the human ventral
stream are (1) dedicated to the recognition of different categories or (2)
specialized for specific recognition tasks.  Our results
reveal that different patterns of activation across the human ventral stream are
correlated with successful identification of different object categories.
However, for each category, the same regions are correlated with correct
detection and correct identification.  These data suggest that the human
ventral stream is organized more around stimulus content than around
recognition task.  Furthermore, the activity in these higher order
areas is directly correlated to our ability to recognize objects.

================================================================================

Jim Haxby
Princeton University

Distributed Representations of Faces and Objects in Human Ventral Temporal
  Cortex

  The ventral object vision pathway, and in particular ventral temporal
extrastriate cortex, has the capacity to generate unique representations for a
virtually unlimited variety of individual faces and objects.  Functional brain
imaging research has demonstrated functional specialization in ventral temporal
cortex, suggesting that this method may be useful in decrypting the functional
architecture that underlies the neural representation of faces and objects. 
Previous work has demonstrated the existence of cortical regions that respond
preferentially to certain stimulus categories (faces or places) or are
associated with certain classes of perceptual processes (visual expertise).  By
contrast, we have argued that the representations of faces and objects are
distributed and overlapping.  According to our model, which we call "object form
topology", ventral temporal cortex contains a topologically-organized
representation of information about the visual appearance of faces and objects.
The representation of a face or object is reflected by a pattern of activity in
which both large responses and small responses carry information about the
appearance of that stimulus.  To test this model we have used functional
magnetic resonance imaging to investigate the patterns of response evoked in
ventral temporal cortex by the perception of faces and a wide variety of
different object categories.
  By dividing the data for an individual in half, we have shown that one can
identify the category of objects that a subject is viewing by analyzing the
similarity of the pattern of response evoked by that category in one half of the
data to the patterns of response evoked by all categories in the other half of
the data.  The ability to identify the category being viewed is not limited to
categories for which specialized systems may have evolved due to their
biological significance, such as faces, but is also seen for small manmade
objects, such as chairs, shoes, and bottles.  The category being viewed also can
be identified with high accuracy based on only the pattern of response in cortex
that responds maximally to other categories.  These results demonstrate that the
specificity of the pattern of response for each category is a property of the
full extent of object-responsive cortex in the ventral temporal lobe, not just
the region that responds maximally to that category.  Within these patterns,
small responses as well as strong responses appear to carry information about
the appearance of faces and objects.
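  The split-half analysis itself is simple to state (a schematic sketch,
assuming one mean voxel pattern per category and half; correlation is the
similarity measure, as in the analysis described above):

    import numpy as np

    def identify_category(pattern_half1, mean_patterns_half2):
        """Correlate one category's pattern from half 1 with every category's
        mean pattern from half 2; the best-matching category wins.
        pattern_half1: (n_voxels,); mean_patterns_half2: name -> (n_voxels,)."""
        scores = {name: np.corrcoef(pattern_half1, p)[0, 1]
                  for name, p in mean_patterns_half2.items()}
        return max(scores, key=scores.get)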

================================================================================

Rik Henson
University College London

Priming Face Recognition

  I will describe recent event-related fMRI and ERP studies of face perception, recognition
and priming that suggest 1) the N170 associated with face perception is most
likely generated from the superior temporal sulcus rather than "fusiform face
area" (FFA), 2) the FFA is also associated with face recognition, possibly via
interactions with more anterior temporal/frontal regions, and 3) priming effects
on face recognition are seen in the FFA, but these reflect late, probably
re-entrant, effects.

================================================================================

David E. Huber
University of Colorado, Boulder

Establishing a Correspondence Between Activity-Dependent Neural Dynamics and
  Inference in a Generative Model of Perceptual Identification
Authors:  David E. Huber and Randall C. O'Reilly

  In recent years, generative Bayesian belief networks have successfully
characterized many information processing systems.  Such models assume that
conceptual representations are responsible for generating observations.  For a
given causal structure and a given input, an inference process determines which
concepts are the most likely generators.  We extend the responding optimally
with unknown sources of evidence (ROUSE) model of Huber, Shiffrin, Lyle, and Ruys
(2001), recasting the theory as a generative Bayesian belief network.  This
allows unification of the original four ROUSE equations and provides a method
for investigating graded activation and graded priming.  By endowing ROUSE with
appropriate activation dynamics, we show how it mimics the neural network model proposed
by Huber and O'Reilly (in press).  In particular, the inference process 
commonly known as "explaining away" is related to activity-dependent neural
accommodation.  This mimicry between the two levels of description suggests that
neural accommodation may have evolved as a method for limiting excessive
persistence from previously identified items. 
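  "Explaining away" itself is easy to exhibit in a toy belief network (two
independent causes, one effect generated by an assumed noisy-OR; all numbers
are invented): once one cause is known to be present, the posterior
probability of the other drops.

    from itertools import product

    pA, pB = 0.3, 0.3                      # priors on the two causes

    def p_effect(a, b):
        """Noisy-OR likelihood of the effect, with a small leak term."""
        return 1 - (1 - 0.8 * a) * (1 - 0.8 * b) * (1 - 0.05)

    def posterior_A(observe_B=None):
        """P(A=1 | E=1[, B=observe_B]) by brute-force enumeration."""
        num = den = 0.0
        for a, b in product([0, 1], repeat=2):
            if observe_B is not None and b != observe_B:
                continue
            joint = (pA if a else 1 - pA) * (pB if b else 1 - pB) * p_effect(a, b)
            den += joint
            num += joint * a
        return num / den

    print(posterior_A())              # about 0.57: A is a good explanation of E
    print(posterior_A(observe_B=1))   # about 0.34: B's presence explains A away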

================================================================================

Lynne Kiorpes
NYU

Extended Development of Global Visual Functions 
Authors: L. Kiorpes and J. A. Movshon

  The critical period for visual development is typically considered to coincide
with the time period over which visual acuity develops.  Recent studies in
monkeys and humans have shown that some visual functions have different critical
periods and some, such as Vernier acuity and contour integration, develop more
slowly and over a much longer period of time than simple grating acuity.  We
studied visual functions that require integration of information over space and
time and compared their development to that for basic spatial vision tasks in
Macaca nemestrina.  The results show that visual development continues over a
longer period of time than was previously thought.
  We studied three types of global visual functions: contour integration, motion
discrimination, and form discrimination.  Contour integration was measured by
detection of the location of a coherent ring of Gabor patches in a field of
randomly-arrayed and oriented Gabors.  Motion discrimination was tested by
detection, and discrimination of the direction, of motion in random dot
kinematograms.  Form discrimination was tested by detection of linear,
concentric, or radial organization in Glass patterns.  Contrast sensitivity
functions were measured for comparison. The animals were tested at ages ranging
from 3 weeks to adult. 
  Contour integration ability develops late and over a longer period of time
compared to contrast sensitivity.  While contrast sensitivity is adult-like by
9-12 months, contour integration develops beginning around 4 months and
continues over 1.5-2 years.  Motion discrimination ability is apparent within
the first 3 postnatal weeks, but develops over a long time course up to about 3
years of age.  Form discrimination is relatively difficult for the animals.  
This ability, like contour integration, develops late, but continues to improve
over several years.  The data show that complex visual functions develop over a
much longer period of time than the classical critical period.

================================================================================
 
Lenny Kontsevich
Smith-Kettlewell Eye Research Institute

Trajectory Correlation: An Alternative to 2-D Correlation in Object Recognition

  To perform recognition, the visual system has to match 2-dimensional visual
input with memory representations.  Most recognition models rely on
2-dimensional matching, which is inflexible and computationally taxing. I will
demonstrate that a much better approach is to perform matching of 1-dimensional
trajectories (in the input image) with 2- or 3-dimensional representations in
memory. This matching can accommodate various kinds of transformations such as
scaling, rotation, perspective distortions, minor nonlinear distortions, etc.
During the matching process, an imprecise initial guess about the transformation
iteratively converges to its accurate value.  This computational scheme can be
easily embedded into a framework of the known vision mechanisms in humans,
imposing interesting (and plausible) constraints on these mechanisms.
  The proposed scheme was implemented as a program for recognition of cursive
characters.  Its operation will be demonstrated.

================================================================================

Zhong-Lin Lu
USC

TBA

================================================================================

Rene Marois
Vanderbilt University

Psychophysical and fMRI Studies of the Capacity Limits of Visual Attention

================================================================================

Timothy McNamara
Vanderbilt University

Sketch of a Theory of Human Spatial Memory

  For the past several years, we have been trying to determine how the locations
of objects in the environment are represented in memory and how remembered
spatial relations are used to guide action in space.  Our findings have led us
to develop a new theoretical framework for conceptualizing human spatial memory.
According to this theory, learning the spatial structure of a new environment
involves interpreting it in terms of a spatial reference system.  This process 
is analogous to determining the "top" of a figure or an object; in effect,
conceptual "north" is assigned to the layout, creating privileged directions in
the environment.  Our working hypothesis is that reference systems intrinsic to
the collection of objects are used (e.g., rows and columns formed by chairs in a
classroom).  Intrinsic directions or axes are selected using cues, such as
viewing perspective and other egocentric experiences (e.g., instructions), the
structure of the layout (e.g., it may appear to be square from a given
perspective), aspects of the surrounding environment (e.g., geographical slant),
and properties of the objects (they may be grouped based on similarity or 
proximity).  An important difference between form perception and spatial memory
is that whereas figures in the frontal plane are oriented in a space with a
powerful reference axis, viz., gravity, the locations of objects are typically
defined in the ground plane, which does not have privileged axes or directions
(e.g., humans cannot perceive magnetic fields).  We therefore propose that the
dominant cue in spatial memory is egocentric experience.  The intrinsic
reference system selected at the initial learning position establishes the 
interpretation, and hence, the memory of the layout.  This reference system
appears to be updated or changed only if a subsequent viewing position is
aligned with more natural axes in the surrounding environment.  In my
presentation, I will summarize the experimental findings that led to the
development of the theory and the results of recent experiments designed to
test it.

================================================================================

Tony Movshon
New York University

The Role of Horizontal Intracortical Connections in "Long-range" Spatial
  Interactions
Authors:  J. A. Movshon, J. R. Cavanaugh, and W. Bair

  In primary visual cortex, as in other cortical areas, neurons are linked by a
system of horizontal excitatory connections that extend over distances of 2-8
mm.  These connections are said to carry signals outside the "classical"
receptive field (CRF), and it is commonly thought that they are responsible for
a variety of "long-range" and "feature-linking" effects observed
psychophysically.  Previous studies have used a conservative definition of CRF
size, the minimum response field (MRF).  But MRF measurement misses parts of the
CRF that are too insensitive to generate spikes when stimulated alone.
  We have measured the size of the CRF in macaque V1 neurons using a grating
summation technique.  On average the MRF underestimates the area of the CRF by a
factor of 4 at high contrast.  At low contrast, the suppressive surround is
weakened and the area of summation increases by an additional factor of 6.
Using published visuotopic maps, we projected our measured CRFs onto the
cortical surface, and found that the majority have radii that correspond to
horizontal cortical distances of 2-6 mm.  We conclude that horizontal
intracortical connections do not link regions outside the CRF, but simply serve
to construct the CRF itself.  Such lateral linking connections are needed to
allow convergence from the small and topographically precise RFs of cells in
layer 4c to the larger RFs of cells in other cortical layers.  Our results
suggest that the circuits responsible for psychophysical long-range interactions
lie outside primary visual cortex.

================================================================================

Jeff Mulligan
NASA Ames Research Center

Eye Movement Dynamics Reveal the Time-Course of Visual Motion Processing

  When a subject views a moving stimulus, his eyes often move (either
voluntarily or involuntarily).  The delay between stimulus events and correlated
responses reflects a combination of motor system delays and visual processing
latencies.  Small variations (tens of milliseconds) are observed due to
variations in low-level parameters such as mean luminance and contrast, while
larger delays (hundreds of milliseconds) are observed when subjects track targets
defined by equiluminant color variation or flicker-defined (second-order)
motion.  Binocular stimulation with independent motion trajectories allows
simultaneous analysis of vergence and version eye movements.  The latency for
vergence eye movements is longer than for version, and does not show the
characteristic 4 Hz oscillation seen for version.  The method allows temporal
dissection of visual mechanisms not easily obtained from psychophysical methods.
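  The latency estimate at the core of the method can be sketched as the lag
that maximizes the cross-correlation between stimulus and eye velocity (a
minimal illustration; the sampling rate and the synthetic 150-ms delay are
assumptions for the demo):

    import numpy as np

    def tracking_latency_ms(stim_vel, eye_vel, fs_hz=240.0):
        """Latency as the lag (eye after stimulus) that maximizes the
        cross-correlation of the two velocity traces."""
        s = stim_vel - stim_vel.mean()
        e = eye_vel - eye_vel.mean()
        lags = np.arange(-len(s) + 1, len(s))
        best = lags[np.argmax(np.correlate(e, s, mode="full"))]
        return 1000.0 * best / fs_hz

    rng = np.random.default_rng(2)
    stim = rng.standard_normal(2400)                  # 10 s at 240 Hz
    eye = np.roll(stim, int(0.150 * 240)) + 0.3 * rng.standard_normal(2400)
    print(tracking_latency_ms(stim, eye))             # close to 150 ms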

================================================================================

Tatiana Pasternak
University of Rochester

Cognitive Influences in Cortical Area MT

  During a visual working memory task, many neurons in area MT are active
while the monkeys remember the motion of the previously viewed sample stimulus
in preparation for comparing it to the upcoming test stimulus.  The activity
during the 1.5 sec memory period (the delay) consists of a brief activation
early in the delay, followed by prolonged inhibition and subsequent reactivation
in anticipation of the upcoming test.  Early activation reflects the direction
and other properties of the remembered sample.  The late reactivation is also
affected by the direction of the remembered stimulus and is strongly amplified
when the presentation of the expected test is postponed by 1.5 sec.  This
pattern of delay activity changes when the expected test is removed from the
receptive field (RF).  When it is placed in a predictable location in the
opposite hemifield, the duration of early delay activation no longer reflects
the direction of the remembered sample.  On the other hand, when the test
location is not predictable and is switched randomly on each trial between the
RF and the opposite hemifield, early activation is stronger and lasts longer and
this effect depends on the direction of the remembered stimulus.  We also found
that many MT neurons respond to motion stimuli presented in the opposite
quadrant of the visual field and that these responses have longer latencies than
responses to the same stimuli presented in the RF.  Thus, MT neurons are
affected by behaviorally significant stimuli presented in the visual field
represented in the
opposite hemisphere as well as by spatial uncertainty and expectation.  These
results suggest an active connection between MT and neurons in cortical regions 
monitoring large portions of the visual field and possessing the information
about the cognitive aspects of the task.  We hypothesize that MT neurons active 
during the delay may constitute a distinct class of neurons that receive
top-down influences arriving from cortical components of the circuitry
underlying visual working memory.

================================================================================

Misha Pavel
Oregon Health and Science University 

Pervasive Digital Healthcare:  Technology in Support of Successful Aging
Authors:  Misha Pavel and Holly Jimison

================================================================================

John D. Pettigrew
University of Queensland

Searching for the Switch:  Focussing on the Timing of Perceptual Rivalry 
  Alternations
Authors:  J. D. Pettigrew and O. Carter

  There is intense controversy about the neural basis of perceptual rivalry,
with a recent position statement by Blake and Logothetis covering a wide range 
of possible viewpoints.  In this paper I will concentrate on the timing of the
rivalry alternations rather than attempting to deal with the quality of the
alternate percepts.  Evidence will be presented that the source of the timing
signals is the ventral striatum, based on rivalry studies involving fMRI,
psychotropic drugs and patients with the major psychoses.

================================================================================

Zygmunt Pizlo
Purdue University

Human Problem Solving - A New Direction
Authors: Z. Pizlo & Z. Li

  The task in solving many problems is to find a series of transformations
(path) from a current state to some goal state, with an additional criterion
that the path is short (possibly the shortest).  Modern research on problem
solving began in the 1950s with Newell & Simon's cybernetic approach.  According
to this approach, a problem solver evaluates the difference (distance) between
the current and the goal state and then chooses transformations which allow
reduction of this difference.  This approach has dominated research on problem
solving in Psychology and Artificial Intelligence during the last
half-century.  One implication of this approach is that if a problem solver cannot
estimate distances among the states, she has to perform an exhaustive search
through the problem space.  There is, however, at least one class of problems,
namely navigation in a Euclidean space, which allows determination of the
shortest path without using distances.  The key concept is the 'direction' of a
vector connecting the start and the end point.  In our previous project we
illustrated how this approach leads to fast and close-to-optimal solutions of
the Traveling Salesman Problem on a Euclidean plane.  The next step was to
generalize the concept of 'direction' to the case of problems that do not have a
Euclidean representation.  We show that this can be done by using a
graph-pyramid representation of a problem.  The pyramid representation is
obtained by performing hierarchical clustering of states of the problem.  The
problem is then solved in a top-down process of refining approximations to the
solution.  This process relies on the topology of the pyramid representation,
rather than on distances or dissimilarities.  As a result, the problem is solved
without search.  We will illustrate our approach by presenting results of
psychophysical and simulation experiments with one class of non-Euclidean,
NP-complete problems.

================================================================================

Roger Ratcliff
Northwestern University

Neural Recording and Decision Models
Authors:  Roger Ratcliff, Anil Cherian, and Mark Segraves

  Recently, models in psychology have been shown capable of accounting for the
full range of behavioral data from simple two-choice decision tasks: mean
reaction times for correct and error responses, accuracy, and the reaction
time distributions for correct and error responses. At the same time, recent
data from neural recordings have allowed investigation of the neural systems
that implement such decisions. In the experiment presented here, neural
recordings were obtained from superior colliculus prelude/buildup cells
in two monkeys while they performed a two-choice task that has been used
in humans for testing psychological models of the decision process. The
best-developed psychological model, the diffusion model, and a competing
model, the Poisson counter model, were explicitly fit to the behavioral
data. The pattern of activity shown in the prelude/buildup cells, including
the point at which response choices were discriminated, was matched by the
evidence accumulation process predicted from the diffusion model using the
parameters from the fits to the behavioral data, but not by the Poisson
counter model. These results suggest that prelude/buildup cells in the
superior colliculus, or cells in circuits in which the superior colliculus
cells participate, implement a diffusion decision process or a variant of
the diffusion process.
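  For concreteness, the accumulation process of the diffusion model can be
simulated in a few lines (a schematic sketch with illustrative parameter
values, not the fitted model):

    import numpy as np

    rng = np.random.default_rng(3)

    def diffusion_trial(drift=0.2, a=0.1, z=0.05, s=0.1, dt=0.001, ter=0.3):
        """One two-choice trial: evidence starts at z and drifts between the
        lower (0) and upper (a) boundaries with within-trial noise s; ter is
        the nondecision time.  Returns (response, RT in seconds)."""
        x, t = z, 0.0
        while 0.0 < x < a:
            x += drift * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return ("upper" if x >= a else "lower"), t + ter

    trials = [diffusion_trial() for _ in range(2000)]
    correct = [rt for resp, rt in trials if resp == "upper"]
    errors  = [rt for resp, rt in trials if resp == "lower"]
    print(len(correct) / len(trials), np.mean(correct), np.mean(errors))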
  
================================================================================

John Reynolds
The Salk Institute

The Role of Competitive Circuits in Macaque Extrastriate Cortex During
  Selective Attention to One of Two Spatially Superimposed Stimuli

  Single unit recording studies of attention in the monkey have identified
competitive circuits in the extrastriate cortex that could mediate selection of
either spatial locations or coherent objects.  These studies have found that
when two stimuli appear together in a cell's receptive field, they activate a
competition that is resolved in favor of the attended stimulus.  While these
studies show that attention operates by resolving competition, they have all
employed objects that appear at separate locations, and this confounds selection
of objects with selection of spatial locations.  Here we report the results of
recent single-unit recording studies of attention in monkeys performing an
object-based attention task.  In this task, monkeys discriminated brief changes
in the motion of one of two stimuli that were spatially superimposed and could
not, therefore, be selected by a purely spatial attentional mechanism.  We find
evidence that competition occurs between neurons that are selective for each of
the two superimposed stimuli.  Further, when one of the two stimuli is
exogenously cued, it dominates neuronal responses for a period of several
hundred milliseconds, which is similar to the time over which human observers
are impaired in discriminating brief changes in the uncued stimulus.  These
results show that competitive selection circuits in extrastriate cortex are
engaged regardless of whether stimuli occupy the same location or separate
locations in space, a necessary condition for neural mechanisms of object-based
selection.

================================================================================

Michael E. Rudd
University of Washington

Perceptual and Neural Filling-in of Achromatic Color:  A Computational Model

  Many contemporary studies of lightness perception are guided by a basic
theoretical model in which lightness is computed in three stages involving:
1) extraction of the edge contrast or luminance ratios at the locations of
luminance borders within the image; 2) spatial integration of the border signals
to establish a scale of relative lightness values for the regions lying between
borders; and 3) anchoring of the relative lightness scale to a physical referent
(commonly assumed to be the highest luminance in the scene) in order to produce
an absolute lightness scale.  One important implication of this theory is that
the lightnesses of regions lying between borders are perceptually filled in by
the brain.  I will review some key findings that support this basic scheme for 
computing lightness and then describe a specific computational model of
lightness processing that I have developed to account for data from my own lab
and from the literature.  A key assumption of the model is that achromatic color
is computed from a linear combination of lightness and darkness induction signals
that spread spatially from borders and decay with distance.  The model yields a 
quantitative theory of spatial edge integration that will be shown to provide a
good fit to experimental data.
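  The core computation can be sketched for a one-dimensional row of patches
(a schematic illustration of stages 1 and 2 with an assumed exponential
decay; the luminances and distances are invented):

    import numpy as np

    def edge_integrated_lightness(luminance, border_pos, target_pos, decay=1.0):
        """Lightness signal at target_pos: every border contributes the log
        luminance ratio of its near (target) side over its far side, weighted
        by exp(-distance/decay), so remote edges induce less."""
        steps = np.diff(np.log(luminance))      # signed left-to-right edges
        border_pos = np.asarray(border_pos, dtype=float)
        signs = np.where(border_pos < target_pos, 1.0, -1.0)
        weights = np.exp(-np.abs(border_pos - target_pos) / decay)
        return float(np.sum(signs * weights * steps))

    lum = [10.0, 40.0, 80.0]            # dark, mid, light patches
    borders = [1.0, 2.0]                # border locations
    print(edge_integrated_lightness(lum, borders, target_pos=1.5))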

================================================================================

Michael D. Rugg
University College London

Neural Correlates of Episodic Memory Encoding

  Recent studies with fMRI using the 'subsequent memory procedure' have
attempted to delineate the brain regions and circuits supporting episodic
encoding.  In this procedure, brain activity elicited by items at the time of
study is contrasted according to whether the items are remembered or forgotten
on a subsequent memory test.  Regions demonstrating differential activity in
this contrast are considered as candidates for the support of encoding
operations.  We have found that the cortical regions identified with this
procedure vary markedly as a function of both study material and task.  In only
a minority of studies have we detected 'subsequent memory effects' in the
hippocampus.  The implications of these findings for current ideas about
episodic encoding and its neural bases will be discussed. 

================================================================================

Rod Shankle
UC Irvine

Omental Transposition in Alzheimer's Disease: Neuroimaging and Clinical Results

================================================================================

Steve Shevell
University of Chicago

A Cortical Receptive Field Accounts for Color Shifts Induced by Chromatic
  Patterns

  Color perception depends on the neural representation of light within visual
pathways.  While a single wavelength has a characteristic color when seen
against a dark background, the same wavelength can appear a different hue when
part of a complete scene.  Contrary to prevailing theory, measurements show that
the shift in color appearance caused by a patterned background composed of two
chromaticities can be far larger than the color shift from a uniform background
at either chromaticity within the pattern.  This implies that human color
perception depends on the spatial structure of chromatic context, not on pooling
of responses from various background regions, or on information implicit in the
various chromaticities in view (as used by theories of color constancy).
Cortical receptive-field organization accounts for these large color shifts.

================================================================================

Richard Shiffrin
Indiana University

Memory Representations of Single Items and Pairs
Authors:  Amy Criss and Richard Shiffrin

  Single items (words and faces) and different types of pairs (face-face;
face-word; word-word) are represented in surprisingly independent fashion!

================================================================================

George Sperling and Ching Elizabeth Ho
UC Irvine and Caltech

Deriving the Properties of Motion Systems from a Competition Paradigm

  Procedure.  Motion stimuli consisting of rows of grating patches are
constructed to produce apparent movement in either of two directions
depending on which patches perceptually match in successive frames
(Werkhoven, Sperling, and Chubb, VisRes 1993, 1994; Ho, PNAS-USA, 1998).
In a first-order direction, patches match in luminance (light/dark);
in a second-order direction, patches match in texture-contrast (high/low);
in a third-order direction, patches match in slant orientation (±45 deg).
Perceived movement was measured in first-order versus third-order displays
and second-order versus third-order, frequencies 1-30Hz.  Stimuli were
presented interocularly (motion perception requires combining signals from
both eyes) or monocularly; the slant cue was either present or absent
(same slant throughout).
  Results.  Perceiving the third-order direction requires the slant cue;
third-order dominates below 5 Hz.  Analysis.  Each competition type
(1vs3, 2vs3) yields two independent, remarkably consistent estimates
of the temporal tuning functions for each competitor.  First- and
second-order motion peak around 10 Hz whereas third-order motion declines
monotonically with frequency reaching zero at 5 Hz.  First- and
second-order are monocular, third-order is indifferent to
monocular/interocular but requires attending to slant-direction.
  Conclusion.  The temporal, monocular/interocular, and attentive
properties of the three perceptual motion systems can be derived from a
motion-competition paradigm and are consistent with previous findings.

================================================================================

Mark Steyvers
University of California, Irvine

The Topics Model for Semantic Representation
Authors:  Mark Steyvers and Tom Griffiths

================================================================================

Bosco Tjan
University of Southern California

Human fMRI Studies of Visual Processing in Noise
Authors:  B. S. Tjan, V. Lestou, Z. Kourtzi, W. Grodd, and H. H. Buelthoff

  Processing of visual information entails the extraction of features from
retinal images that mediate visual perception. In the human ventral cortex,
early and higher visual areas (e.g. Lateral Occipital Complex-LOC) have been
implicated in the analysis of simple and more complex features respectively.  To
test how processing of complex natural images progresses across the human
ventral cortex, we used images of scenes and added visual noise that matched the
signal in spatial-frequency power spectrum.  The resulting images were rescaled
to ensure constant mean luminance and rms contrast across all noise levels.  We
We localized individually in each observer the retinotopic regions and the LOC
and measured event-related BOLD response in these regions during a scene
discrimination task performed at 4 noise levels.  Behavioral performance
increased with increasing signal-to-noise ratio.  We found that log %BOLD signal
change from fixation baseline vs. log SNR is well-described by a straight line
for all visual areas.  The regression slope increased monotonically from early
to higher areas along the ventral stream.  For example, changes by a factor of 8
in SNR produced little or no change to the BOLD response in V1/V2, but resulted
in progressively larger increases in V4v, posterior, and anterior subregions of
the LOC.  These findings suggest that the use of visual noise can reveal the
progression in complexity of the natural-image features that are processed
across the human visual areas.  
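  The noise construction can be sketched compactly (a minimal illustration
under the stated constraints; not the experiment code, and the SNR convention
is an assumption): impose the image's Fourier amplitude spectrum on random
phases, then mix signal and noise at a fixed total contrast.

    import numpy as np

    rng = np.random.default_rng(4)

    def spectrum_matched_noise(img):
        """Noise with the image's amplitude spectrum: apply the image's
        Fourier magnitudes to the (Hermitian) phases of white noise."""
        mag = np.abs(np.fft.fft2(img - img.mean()))
        white = np.fft.fft2(rng.standard_normal(img.shape))
        return np.real(np.fft.ifft2(mag * np.exp(1j * np.angle(white))))

    def mix_at_snr(img, snr):
        """Signal-plus-noise image with the signal's mean luminance and rms
        contrast, at the requested signal-to-noise power ratio."""
        sig = img - img.mean()
        noise = spectrum_matched_noise(img)
        mix = (np.sqrt(snr / (1.0 + snr)) * sig / sig.std()
               + np.sqrt(1.0 / (1.0 + snr)) * noise / noise.std())
        return img.mean() + mix * sig.std()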

================================================================================

Roger Tootell
Harvard Medical School

TBA 

================================================================================

Patrik Vuilleumier
University of Geneva

Top-down Emotional Influences From Amygdala on Face Processing

  Results from fMRI studies in patients with medial temporal lobe damage show
that an intact amygdala is necessary to enhance visual cortical responses to
fearful expression in faces, modulating several visual areas at both early and
late stages of processing.  Such modulation appears to operate through direct
ipsilateral feedback connections, independently of voluntary attentional
control.  Fear-related responses in other areas including insula and cingulate
cortex also appear modulated by the amygdala.  These effects provide a neural
substrate by which attention may be summoned more readily by emotional than
neutral stimuli.

================================================================================

Anthony D. Wagner
MIT

The Cognitive Neuroscience of Memory With and Without Recollection

  A central function of memory is to permit an organism to distinguish between
stimuli that have been previously encountered and those that are novel. From one
perspective, recognition of a previously encountered stimulus may be based on
conscious recollection of specific episodic details of the prior encounter or on
a sense of stimulus familiarity in the absence of recollection.  Alternatively,
single-process theorists have posited that recognition might be accompanied by
recollection or familiarity, but that these subjective states reflect a
quantitative difference along a single memory dimension.  Recent fMRI studies that
explore the cognitive and neurobiological bases of recognition will be discussed
to address a set of fundamental questions:  (a) Do recollection and familiarity
differ quantitatively or qualitatively?  (b) What are the neurocognitive
processes that contribute to the building of memories that ultimately support
recognition with or without recollection? (c) During attempts to remember, can
individuals strategically allocate attention to recollective and nonrecollective
forms of memory? and (d) What is the relation between retrieval orientation and
the outcome of the retrieval attempt?  Evidence will be discussed that suggests
that memory formation partially depends on an interaction between cognitive
control processes that are subserved by the prefrontal cortices and binding
mechanisms that are mediated by the medial temporal lobes.  The specific
computations that build memories with and without recollection appear to be
separable, pointing to a qualitative distinction between these two forms of
remembering.  Moreover, during attempts to remember, multiple control processes
can be strategically recruited to orient towards, and "work with," recollective
as opposed to non-recollective knowledge.  Collectively, these data indicate
that the ability to recognize a previously encountered stimulus emerges from a
complex interplay between distinct neurocognitive circuits that support
conscious recollection and stimulus familiarity.

================================================================================

Anna Zalevski
Oxford University

Conflict Between Horizontal Disparity and Vertical Scaling in Stereoacuity
Authors:  A. M. Zalevski, L. I. Browning, G. B. Henning, and N. J. Hill

  The precision with which the relative depth of two vertical lines is judged
can be as small as 5 seconds of arc.  However, McKee (Vision Research, 23, 1983)
showed deterioration in detecting relative depth when the vertical lines are
perceptually linked.  When the stimuli form part of a square, for example,
stereoacuity is often more than 40-fold worse.  However, the vertical lines used
in McKee's research had the same vertical extent -- unchanged with changing
horizontal disparity -- thus introducing cue conflicts.  We used two 25'
vertical lines and
connected vertical lines (horizontally separated by approximately 20') presented
on matched CRTs and viewed binocularly in a modified Wheatstone stereoscope at a
distance of 3 m.  The relative depth of the vertical lines was judged using two
1-s presentation intervals.  The observers were required to choose the interval
in which the leftmost vertical line appeared closer.  Three conditions were
tested:  (a) as in McKee's experiment, with only horizontal disparity available
(creating possible cue conflicts); (b) with both horizontal disparity and
vertical scaling (size and perspective cues available); and (c) with vertical
scaling alone.  Our results were like McKee's in that when conflicting vertical
scaling and disparity information was present, stereoacuity with closed (square)
stimuli was very much worse than with simple vertical lines.  However, when
disparity and vertical scaling provided consistent information about relative
depth, the effect almost disappeared; stereoacuity with squares was only
slightly worse than with vertical lines.  Vertical scaling in the absence of
horizontal disparity provided a strong cue to relative depth when the lines were
connected (squares) but not for simple vertical lines.  A partial explanation
is that conflict between disparity and vertical scaling, introduced when the
physical length of the vertical lines is kept constant despite changing
horizontal disparity, produces the marked deterioration in stereoacuity
previously found with closed configurations.

================================================================================