Teton Village, Jackson Hole, Wyoming
                             February 1 - 6, 2004
        Organizer:  George Sperling, University of California, Irvine



Benjamin Backus
University of Pennsylvania

Recalibration in Mechanisms for Measuring Relative Disparity

  Wallach (1968) argued that adaptation to distortions in sensory input can
often be understood as corrective responses to "informational discrepancy."
Perceptual adaptations show the system to be plastic, and one supposes that
mechanisms must exist to keep perceptual estimators calibrated with respect to
the world.  One strategy is to keep them calibrated with respect to one another.
We developed a model system for studying this process.  The relative disparity
between two points in a scene can be measured from their retinal relative
disparity (RRD), or from the change in vergence required to fixate them in turn
("delta vergence" or DV).  Human observers use both mechanisms (Backus &
Matza-Brown, 2003).  In the laboratory RRD and DV can be put into conflict, so
that they specify different relative disparities.  If the conflict persists,
one might expect the visual system to recalibrate one, the other, or both
mechanisms.  Current theory (e.g. Ghahramani, Wolpert & Jordan, 1996) predicts
that the relative rates of adaptation for RRD and DV would be proportional to
their reciprocal variances -- the more reliable cue will be used to recalibrate
the less reliable cue.  However, the visual system may know that some perceptual
mechanisms fall out of calibration more quickly than others, in which case the
rate of adaptation should also be proportional to the rate at which loss of
calibration occurs (Backus, VSS 2003).  RRD is measured directly from the
retinal images, so the RRD mechanism could be quite stable over time, whereas
the DV mechanism depends on the use of controlled eye movements, and eye
movements are constantly recalibrated by the visual system.  We reasoned that
the DV mechanism ought to adapt, rather than the RRD mechanism, even when DV is
the more reliable (lower variance trial-to-trial) cue.  Experimentation supports
this hypothesis.
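
The two candidate rules can be made concrete with a small sketch (ours, not the authors' model code; all variances and drift rates below are hypothetical):

```python
# Two rules for apportioning recalibration between the RRD and DV cues
# when they conflict.  One way to formalize the reliability-based rule is
# that each cue is recalibrated at a rate set by the reliability of the
# OTHER cue, so the more reliable cue recalibrates the less reliable one.

def reliability_rule(var_rrd, var_dv):
    """After Ghahramani, Wolpert & Jordan (1996): returns normalized
    recalibration rates (rrd, dv) from the cues' trial-to-trial variances."""
    rel_rrd, rel_dv = 1.0 / var_rrd, 1.0 / var_dv
    rate_rrd, rate_dv = rel_dv, rel_rrd          # other cue's reliability
    total = rate_rrd + rate_dv
    return rate_rrd / total, rate_dv / total

def drift_adjusted_rule(var_rrd, var_dv, drift_rrd, drift_dv):
    """After Backus (VSS 2003): additionally scale each cue's rate by how
    fast that mechanism is expected to fall out of calibration."""
    rel_rrd, rel_dv = 1.0 / var_rrd, 1.0 / var_dv
    rate_rrd, rate_dv = drift_rrd * rel_dv, drift_dv * rel_rrd
    total = rate_rrd + rate_dv
    return rate_rrd / total, rate_dv / total

# DV as the more reliable (lower variance) but faster-drifting mechanism:
# the reliability rule says RRD should adapt, whereas the drift-adjusted
# rule predicts DV adapts, as the experiments found.
print(reliability_rule(var_rrd=4.0, var_dv=1.0))
print(drift_adjusted_rule(4.0, 1.0, drift_rrd=0.1, drift_dv=2.0))
```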


Randolph Blake
Vanderbilt University

The Colorful Perceptual World of Synesthesia

  Synesthesia - the mental mixture of real and illusory sensory experiences - is
incredibly fascinating to hear about but frustratingly complex to study.  Those
of us who are not synesthetes are spellbound by the accounts of those who are,
but at the same time we are mystified by why these mixtures would occur.  In
recent work, my colleagues and I have focused on color-graphemic synesthesia:
the perception of color when viewing achromatic alphanumeric characters.  Our
aim is to develop "objective" psychophysical strategies for going beyond the
colorful verbal accounts of individuals with this "condition" -- in particular,
we have sought to learn the extent to which synesthesia is genuinely perceptual
in nature, and in this presentation I will summarize some of our work addressing
this question.


Matteo Carandini
Smith-Kettlewell Eye Research Institute

Suppressive Fields and Adaptive Responses in Early Visual System

  The responsiveness of neurons in the early visual system depends on the
prevailing stimulus statistics.  In the lateral geniculate nucleus (LGN), the
output
of the classical receptive field is divided by that of a suppressive field,
which computes the local variance of the stimulus.  Responsiveness, thus,
depends on the pattern of stimulation in and around the receptive field. 
Similar and stronger mechanisms are at work in primary visual cortex (V1). 
Here, suppressive fields have complex preferences for stimulus size and
orientation, and profoundly affect responses.  Thanks to powerful adaptation
mechanisms, moreover, responsiveness in V1 also depends on prior history of
stimulation.  There are competing explanations, based on synaptic, cellular and
anatomical mechanisms, for how neurons in the early visual system might achieve
division and adaptation.


Charles Chubb
University of California, Irvine

Human Visual Sensitivity to Contrast is Three-dimensional
Authors:  Chubb, C., Sperling, G., & Landy, M.S.

  How many distinct, contrast-selective mechanisms does human vision possess?
We show that the answer is three.  We address this question by investigating
preattentive discrimination of randomly scrambled, achromatic textures composed
of mixtures of different (Weber) contrasts.  We call such textures "scrambles."
Scrambles differ not at all in local spatial structure, but only in the relative
proportions of different contrasts they comprise.  We show that (like color
discrimination) preattentive discrimination of scrambles is three-dimensional.
Three mechanisms that account for our results are the Brightness, Energy and
Blackshot mechanisms, for which we provide empirically derived sensitivity
functions.  The impact exerted on texture Brightness by a texture element
(texel) of contrast c is approximately proportional to c (implying that texture
Brightness is approximately proportional to the mean contrast of the texture). 
The impact exerted on texture Energy by a texel of contrast c is roughly a
parabolic function of c (implying that texture Energy is approximately equal to
the variance of patch contrast).  Texture Blackshot is influenced only by the
very darkest texels.  Specifically, Blackshot level is increased sharply by
texels of contrast -1.0, slightly by texels of contrast -0.875, and not at all
by texels of contrast >-0.875.  Thus texture Blackshot is roughly proportional
to the proportion of texels with contrast -1.0.  Under our theory, any two
scrambles with equal levels of Brightness, Energy and Blackshot should be
indiscriminable within the texture system.  In support of this claim, we
demonstrate some dramatic texture metamers, i.e., scrambles with drastically
different histograms that appear identical because they have equal Brightness,
Energy and Blackshot.  Although the Brightness and Energy mechanisms are
expected from previous theories, the Blackshot mechanism is largely new.
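
The three statistics can be sketched directly from a scramble's contrast histogram (our notation, not the authors' code; the example histograms are made up to satisfy the matching constraints):

```python
# Brightness ~ mean contrast, Energy ~ mean squared contrast, and
# Blackshot ~ proportion of texels at contrast -1.0.

def mechanism_outputs(hist):
    """hist: dict mapping Weber contrast in [-1, 1] to proportion of texels."""
    brightness = sum(p * c for c, p in hist.items())
    energy = sum(p * c * c for c, p in hist.items())
    blackshot = hist.get(-1.0, 0.0)
    return brightness, energy, blackshot

# Two scrambles with very different histograms but matched statistics --
# the theory predicts these are metamers within the texture system.
scramble_a = {-1.0: 0.2, 0.0: 0.6, 1.0: 0.2}
scramble_b = {-1.0: 0.2, -0.5: 0.2, 0.5: 0.6}
print(mechanism_outputs(scramble_a))
print(mechanism_outputs(scramble_b))
```

Both histograms yield Brightness 0, Energy 0.4, and Blackshot 0.2 despite sharing only the -1.0 bin.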


Karen Dobkins
University of California, San Diego

Development of Motion Processing in Human Infants 

  Several studies have demonstrated infants' ability to discriminate direction
of 1-dimensional (1D) moving contours.  However, it is not yet clear whether
infants integrate local 1D motions into global, pattern (2D) motion.  Our
laboratory has been investigating this using an eye movement technique that
measures subjects' ability to track leftward vs. rightward pattern motion in a
stimulus consisting of a field of spatially segregated moving gratings.  Each
grating moves in one of two directions (72 deg vs. -72 deg, or 108 deg vs. 252
deg), with the two directions interleaved across the display.  When spatially
integrated, pattern motion for these paired component motions is 0 deg
(rightward) or 180 deg (leftward), respectively.  To control for the possibility
that horizontal eye movements elicited by this stimulus are due to the
horizontal motion vector present in each obliquely-moving grating, we also
measure responses to a field where every grating moves in the same direction
(72 deg, -72 deg, 108 deg, or 252 deg).  The difference in performance between
the integration stimulus and this control stimulus is taken as a measure of
integration.  Data from 2-, 3-, 4- & 5-month-olds reveal significant motion
integration, suggesting that higher-order motion areas, such as MT, may develop
at a relatively early age.  In addition, the integration effect decreases
consistently and significantly with age, suggesting a reduction in the spatial
extent of motion integration over the course of development. 
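
The pattern-motion prediction for these paired components can be checked with a small vector-average sketch (illustrative only; vector averaging is one simple combination rule, and the function below is ours, not the authors'):

```python
import math

def pattern_direction(d1_deg, d2_deg):
    """Direction of the vector average of two unit component motions."""
    x = math.cos(math.radians(d1_deg)) + math.cos(math.radians(d2_deg))
    y = math.sin(math.radians(d1_deg)) + math.sin(math.radians(d2_deg))
    return math.degrees(math.atan2(y, x)) % 360

print(pattern_direction(72, -72))    # ~0 deg: rightward pattern motion
print(pattern_direction(108, 252))   # ~180 deg: leftward pattern motion
```
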
  In a second series of experiments, we have been investigating the development
of context effects on motion processing by studying the barber pole (BP)
illusion, in which perceived direction of a moving grating viewed through a
rectangular aperture is biased along the major axis of that aperture.  This
effect is thought to reflect integration of 1D motion signals of the grating
interior, which are ambiguous, with 2D motion signals of the line terminators,
which are unambiguous and yield a mean 2D signal dependent on aperture
orientation.  To study this phenomenon in infants, we used a directional eye
movement technique (described above) to measure the ability to track leftward
vs. rightward motion of obliquely-moving gratings viewed through horizontal (H)
vs. vertical (V) apertures.  The BP effect, quantified as the difference in
percent correct performance for H vs. V apertures, was ~10% and did not vary
significantly between 1 and 5 months of age.  In another group of infants, we
quantified the BP effect by obtaining the "equivalent direction" (EqDIR),
defined as the direction of motion in H apertures that yields the same
performance as a 45 deg grating moving within V apertures. The "effective shift"
in perceived direction produced by the aperture orientation is then calculated
as the difference between EqDIR and 45 deg.  Preliminary data from this
experiment indicate that the effective shift is ~16 deg in infants aged 3 to 5
months.  These results suggest that infant motion processing, like that of
adults, is influenced by 2D motion signals produced by line terminators.


Barbara Dosher
University of California, Irvine

Mechanisms and Limits of Perceptual Learning:  Learning Luminance and Texture
Authors:  Dosher, B., & Lu, Z.-L.

  Perceptual learning is a change, usually an improvement, in performance of a
perceptual task reflecting plasticity in perceptual processing -- a change of
state of the observer.  Dosher & Lu (PNAS, 1998; Vision Research, 1999)
introduced external-noise tests and a noisy ideal observer framework, the
perceptual template model (PTM) (Lu & Dosher, Vision Research, 1998; JOSA, 1999;
Dosher & Lu, Psychological Science, 2000), to characterize mechanisms of state
change in perceptual learning.  Changes in processing state due to plasticity
are associated with either (i) improvements in external noise exclusion by
template retuning (high noise), (ii) stimulus enhancement through gain
magnification (low noise), (iii) changes in system non-linearity properties, or
(iv) mixtures of these mechanisms.  External noise exclusion is analogous to
filtering in signal processing and to retuned sensitivity in physiology.
Stimulus enhancement is analogous to amplification in signal processing, and to
gain enhancement in physiology.  In a range of tasks, learning improves external
noise exclusion in high noise environments, or it improves stimulus enhancement
in low or zero noise environments, or both, in a pattern that is inconsistent
with the simpler notion of perceptual learning as improved efficiency.
Instead, perceptual learning is task dependent and may decouple mechanisms of
learning in high and low noise.  Here we report a salient example: in a task in
which objects are defined by second-order (texture) information, learning is
limited to stimulus enhancement (or internal noise reduction) and is evident in
low noise conditions only.  This
finding is analogous to previous findings of stimulus enhancement in second
order motion processing (Lu, Liu, & Dosher, Vision Research, 2000).  These
results suggest the importance of stimulus enhancement, or reduction of
internal noise, for intrinsically noisy representations of second-order stimuli.
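
The signature logic of the external-noise method can be sketched with a toy threshold equation in the spirit of the PTM (parameter values and the exact parameterization below are illustrative, not fitted to any data):

```python
# Toy PTM-style contrast threshold vs. external noise.  beta is a gain
# applied to the stimulus (signal plus external noise), n_add and n_mul
# are additive and multiplicative internal noises, gamma a transducer
# exponent.  Stimulus enhancement = larger beta; external-noise
# exclusion is modeled here as shrinking the effective n_ext.

def ptm_threshold(n_ext, beta=1.0, gamma=2.0, n_add=0.1, n_mul=0.2, dprime=1.0):
    num = dprime**2 * ((beta * n_ext)**(2 * gamma) + n_add**2)
    den = 1.0 - dprime**2 * n_mul**2
    return (num / den) ** (1.0 / (2 * gamma)) / beta

# Enhancement lowers thresholds where internal additive noise dominates
# (low external noise) but not in high noise, where the amplified external
# noise dominates; exclusion does the reverse.
for n_ext in (0.0, 1.0):
    print(n_ext, ptm_threshold(n_ext), ptm_threshold(n_ext, beta=1.5),
          ptm_threshold(0.5 * n_ext))
```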


James T. Enns
University of British Columbia

Multiple Object Tracking is Scene-based, not View-based

  This study asked whether multiple object tracking (MOT)--the ability to
visually index an object based on its spatiotemporal history--is premised on a
scene-based or image-based representation.  Initial experiments showed that
tracking was comparable for objects moving in depictions of 2D and 3D scenes,
meaning that MOT was impaired similarly in these two cases by increases in the
speed of object motion.  Experiments were then conducted in which object speed
was manipulated independently of the speed of movement of the scene as a whole
(scene speed).  The results showed that object speed had a large influence on
accuracy, but that scene speed had no measurable influence.  This held whether
the scene underwent translation, zoom, rotation, or even a combination of all
three motions, which we termed the "wild ride."  In a final series of
experiments we taxed observers' ability to perceive a coherent 3D scene in two
ways.  In one condition observers tried to track objects moving at different
speeds in the same scene (multiple speeds reduce scene coherence) and in the
other they tried to track objects moving at identical retinal speeds but
perceived to be in either a coherent or a distorted 3D space.  Both of these
manipulations reduced tracking accuracy, consistent with tracking being
accomplished within a scene-based or allocentric frame of reference.


Wilson Geisler
University of Texas, Austin

Transient Response Properties of V1 Neurons
Authors:  Geisler, W.S., Albrecht, D.G., Frazor, R.A., & Crane, A.M.

  Under most natural viewing conditions, the eyes fixate a given location for
200-300 ms and then move on to another location.  Thus, we have been
quantitatively investigating the contrast and spatial response properties of
macaque and cat striate neurons to transient stimuli presented for 200 ms in the
classical receptive field.  Post stimulus time histograms (PSTHs) were measured
as a function of contrast, spatial frequency and spatial phase.  The main
results are as follows.  (1) The shapes of the PSTHs over the first 200 ms of
the response vary widely from cell to cell, but within a cell they are (largely)
invariant with contrast, spatial phase and spatial frequency.  (2) Saturating
contrast response functions are observed as soon as a response increases above
baseline (even in the first 10-20 ms of the response).  (3) The latency (time
shift) of the PSTH decreases with stimulus contrast, increases with spatial
frequency, and is independent of spatial phase.  (4) The maximum of the contrast
response function (Rmax) varies over time and spatial phase, but otherwise the
shape of the contrast response function is invariant (once the latency effect is
taken into account). (5) The latency shift with spatial frequency results in
large changes in the peak spatial frequency (average of 1 octave) during the
first 30-50 ms of the response.  (6) Most of the detection information
transmitted by the neurons is contained in the first 100 ms.  The results for
the dimensions of contrast and spatial phase strongly suggest that contrast
normalization and response expansion (accelerating nonlinearities) are fully
established within a very brief time period (i.e., as soon as the response can
be measured).  We show that such rapid gain control makes ecological sense given
the statistics of contrast in natural scenes.  Finally, we demonstrate that the
latency effects with spatial frequency are (on average) consistent with the
hypothesis that most striate neurons receive input from neurons in both the
magnocellular and parvocellular layers of the LGN.


Norberto M. Grzywacz
University of Southern California

Does Adaptation Optimize the Retina?

  Adaptation allows biological sensory systems to adjust to variations in the
environment and thus to deal better with them.  In the case of the retina,
several theories postulate that the role of adaptation is to maximize the amount
of GENERIC luminance information delivered to the rest of the brain.  However,
we will show that they fail to account for some data on horizontal-cell
adaptation.  In particular, these theories cannot account for how the receptive
fields of these cells adapt to changes of environmental background luminance.  
An alternative that is more successful postulates that retinal adaptation
optimizes the extraction of SELECTIVE kinds of information, such as contrast,
intensity, and edges.  The proposed optimization is Bayesian,
requiring prior knowledge of their natural statistics and of the limitations of
the mechanisms processing the information.  One problem with such optimization
is that the environment changes in time and one must specify how to know what
the current state of the environment is. 
  To solve this problem, the retina may optimally estimate the environmental
state from the temporal stream of responses.  We show that such optimal
estimation is a generalized form of Kalman filtering.  An application of this
Kalman-filtering framework to retinal contrast-adaptation data yields excellent
results.  The success of this and related theories suggests that retinal
adaptation is a form of constrained biological optimization.
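
The estimation idea can be illustrated with a minimal scalar Kalman filter (a sketch of the general principle, not the generalized filter of the talk; all parameter values are illustrative):

```python
# Tracking a slowly drifting environmental state (e.g., mean contrast)
# from a stream of noisy observations.

def kalman_track(observations, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """q: variance of the environment's random drift per step;
    r: variance of the observation noise."""
    x, p = x0, p0
    estimates = []
    for z in observations:
        p = p + q                # predict: the environment may have drifted
        k = p / (p + r)          # Kalman gain: trust in data vs. prediction
        x = x + k * (z - x)      # update estimate toward the observation
        p = (1.0 - k) * p        # updated estimate uncertainty
        estimates.append(x)
    return estimates

# After a step change in the environment, the estimate converges at a
# rate set by the steady-state gain.
print(kalman_track([1.0] * 200)[-1])
```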


David Heeger
New York University

Wave of Activity in V1 Correlates with Waves of Dominance During Binocular
Rivalry


Scott P. Johnson
New York University

Rule Learning in Infancy

  A hallmark of human cognition is its flexibility:  our ability to learn and
retrieve information, reason, categorize, hypothesize, and predict future events
under a wide variety of circumstances.  Two central issues involved in
investigations of cognitive flexibility and knowledge acquisition are (a) the
distinction between learning simple associations vs. more abstract rules, and
(b) the effects of the nature of the stimulus input to learning mechanisms, a
phenomenon sometimes referred to as domain-specificity.  The development of 
abstract reasoning in humans, likewise, is of considerable theoretical
importance, yet there has been no systematic investigation in the literature of
its origins in infants.  I will describe the initial efforts toward this goal
that are ongoing in my lab.  The emphasis at present is twofold, following the
issues highlighted previously.  First, I am investigating whether young infants
are more adept at statistical learning or rule learning.  Second, and in
parallel, I am exploring the possibility that rule learning is facilitated by
particular kinds of input, such as speech sounds (when learning an auditory
rule) vs. simple colored shapes (when learning the same rule instantiated in
visual stimuli).  At present the data appear to suggest that rule learning may
represent a more protracted developmental process relative to statistical 
learning, and that speech may enjoy a privileged status in early rule learning.
These tentative conclusions are made more complicated by the fact that some
rules are more readily acquired than others.


Lynne Kiorpes
New York University

Development of Visual Motion Mechanisms
Authors:  Kiorpes, L., & Movshon, J.A.

  We have studied the development of sensitivity to motion in macaque monkeys,
using dynamic random-dot kinematograms whose motion the monkeys detect or
discriminate.  Through longitudinal testing, we have found that development of
sensitivity to visual motion has an extended time course.  Such basic visual
functions as acuity are adult-like by 6 to 9 months after birth, but motion
sensitivity develops over 3 years.  Infants are able to integrate cues to visual
motion by 3 weeks after birth, but reach asymptotic performance over several
years. 
  We studied visual motion processing physiologically in 1-, 4-, 16-week, and
adult monkeys, by recording from single neurons in cortical areas V1 and MT.  In
V1, we found remarkable maturity of neuronal selectivity.  In particular,
direction selectivity was adult-like in newborns.  However, response latency was
long and temporal resolution was poorer in infants than in adults.  This pattern
was even more marked in MT.  Visual latencies were extraordinarily long for
infant cells and responses to high temporal frequencies were very weak.  Most
striking was a paucity of pattern direction selective cells in infant MT.  We
will discuss implications of the physiological findings for the behavioral
development of motion sensitivity.


Zoe Kourtzi
Max Planck Institute

fMRI Studies of Plasticity in the Primate Visual Brain

  Postnatal plasticity of the brain is required for the development of complex
adaptive cognitive behavior.  Both the functional maturation of the nervous
system and sensory experience contribute to the development of complex cognitive
functions.  We use fMRI as a non-invasive tool for the longitudinal study of
developmental and learning-based neural plasticity in the primate brain.  We
study the plasticity of the mechanisms that mediate coherent visual perception
in the human and the monkey brain.  Our human fMRI studies showed a strong link
between behavioral improvement in visual shape discrimination after training and
neuronal plasticity across early retinotopic and higher occipitotemporal areas.
Interestingly, these studies showed that the neural mechanisms underlying
perceptual learning in the human visual brain are modulated by attention.  That
is, the visual brain learns to enhance the saliency of targets in cluttered
scenes, but requires focal attention to represent the features critical for
their discrimination and recognition.  Finally, we present current longitudinal
fMRI studies on infant monkeys in an attempt to trace the maturation and
plasticity of the neural mechanisms that contribute to coherent form and motion
perception.


Maria Kozhevnikov
Rutgers University

Spatial Versus Object Imagers: A New Characterization of Visual Cognitive Style

  Recent theories of mental imagery distinguish between two types of imagery,
visual-object and spatial.  We found the same dissociation in individual
differences in imagery. Two hundred undergraduate psychology students and 63
members of different professions were administered a computerized battery of
spatial and object imagery tests.  Our results show that some people are
better at constructing vivid and detailed images (object imagers), whereas
others are better at constructing schematic images of spatial
relations (spatial imagers).  Moreover, object imagers usually perform below
average on spatial imagery tests, while spatial imagers perform below
average on object imagery tests.  The most significant distinction was found
between scientists and visual artists.  Visual artists were significantly
better than scientists on object imagery tests and reported object imagery
preferences, whereas scientists outperformed visual artists on spatial
imagery tests and reported spatial imagery preferences.


Peter Lennie
New York University

Some Peculiarities of Contrast Adaptation 
Authors:  Peter Lennie, Sam Solomon, Jon Peirce, & Neel Dhruv

  Prior exposure to a moving grating pattern of high contrast leads to a
substantial and persistent reduction in the contrast sensitivity of neurons in
the lateral geniculate nucleus (LGN) of macaque.  This form of contrast
adaptation, which hitherto has been thought to occur only in visual cortex, is a
distinctive characteristic of magnocellular (M) cells but not of parvocellular
(P) cells.  Simultaneous recordings of M-cells and the potentials of ganglion
cells that drive them show that the adaptation arises in ganglion cells.  As
would be expected from the spatio-temporal tuning of M-cells, the adaptation is
broadly tuned for spatial frequency and is not orientation-selective.  
Adaptation can be induced by high temporal frequencies to which cortical neurons
do not respond, but not by low temporal frequencies that are potent adaptors of
cortical neurons.  Our observations show that contrast adaptation must occur at
multiple levels in the visual system, and they provide a new way to reveal the
function and perceptual significance of the M-pathway.


Zhong-Lin Lu
University of Southern California

Fast Decay of Iconic Memory in Observers At Risk for Alzheimer's Disease
Authors:  Lu, Z.-L., Neuse, J., Madigan, S.A., & Dosher, B.A.

  Yang (Dissertation, NYU, 1999) reported an unusual observer who showed very
fast decay of iconic memory.  Unexpectedly, this observer was diagnosed two
years later with Alzheimer's disease.  Is fast decay of the partial-report
superiority effect an early sign of Alzheimer's disease?  Mild Alzheimer's
patients generally have significant long-term episodic and semantic memory
deficits as well as deficits in working memory tasks, even though they are at
most slightly impaired compared to normals in auditory and spatial short-term
memory tasks.  No systematic study of partial report superiority has been
carried out in the Alzheimer's population.  In this study, we assessed iconic
memory using the partial report paradigm (Sperling, 1960) in three groups:
people at risk for Alzheimer's disease (CDR: 0.5 to 1.0), college-age young
controls, and older controls.  In addition, we assessed cognitive performance of
the at-risk and old control groups with a neuropsychological test battery.  We
found:  (1) The at-risk observers performed significantly more poorly in a
number of neuropsychological tests; (2) In pre- and simultaneous cue conditions,
both the at-risk and the older control groups performed above 90% correct with
no significant difference between the two, suggesting adequate and equivalent
visual letter identification; (3) Neither the capacity of iconic memory nor the
capacity of short-term memory was significantly correlated with age or CDR,
suggesting constant iconic and short-term memory capacity over age and CDR; and
(4) The duration of iconic memory was very short (< 50 ms) for the at-risk
observers compared to normal adults (270 ms).  This difference remained
significant after age was partialled out.  We discuss our results in light of
recent physiological studies of the locus of sensory memory and theories on
aging and Alzheimer's disease.


Kenneth J. Malmberg
Iowa State University

The Status of Single-Process Models of Remember-Know Judgments:  Misconceptions
and Resolutions

  Models that assume that a continuous random variable (e.g., familiarity,
similarity, etc.) is the basis for recognition memory decisions have dominated
the field for at least 35 years (e.g., Green & Swets, 1966).  While these models
are almost universally regarded as overly simplistic (cf. Gillund &
Shiffrin, 1984), disconfirming behavioral evidence has remained elusive.  Endel
Tulving (1983) proposed that memories can be organized into two mutually
exclusive classes.  One class allows for the awareness of context specific
details of past events.  When in such a state, one is said to be "remembering"
a past event.  In the absence of such episodic details, Tulving proposed that
one may nevertheless "know" that a certain event occurred.  Over the past 15
years or so, Tulving's hypothesis has been tested numerous times in the
recognition memory paradigm by asking subjects to indicate whether their
decisions are made on the basis of "remembering" or "knowing."  A common finding
is that an operation has opposite effects on P(remember) and P(know), and these
findings have been used to reject continuous-state models of recognition memory.
In this talk, I'll resolve several misconceptions about continuous-state models,
and I'll show that the Retrieving Effectively from Memory (REM) model (Shiffrin
& Steyvers, 1997; Malmberg, Zeelenberg, & Shiffrin, in press) can readily
account for the effects of item strength, list length, normative word-frequency,
and
midazolam amnesia on remember-know judgments.

Tim McNamara
Vanderbilt University

Semantic Priming:  Beyond Spreading Activation and Compound Cues

  In 1971, David E. Meyer and Roger W. Schvaneveldt published an article in the
Journal of Experimental Psychology entitled, "Facilitation in recognizing pairs
of words:  Evidence of a dependence between retrieval operations."  This article
would become one of the most influential articles published in cognitive
psychology.  In the first experiment, twelve high-school students were asked to
decide whether two simultaneously presented strings of letters were both words
(e.g., table-grass) or not (e.g., marb-bread).  Of the word-word pairs, half
were semantically related (e.g., nurse-doctor) and half were not (e.g.,
bread-door).  On the average, responses were 85 milliseconds faster to related
pairs than to unrelated pairs.  This phenomenon came to be known as "semantic
priming."  Semantic priming occurs in many cognitive tasks, including lexical
decision, naming, and semantic categorization.  The ubiquity of semantic priming
suggests that it is caused by fundamental mechanisms of retrieval from memory.
Semantic priming is commonly used as a tool for investigating other aspects of
perception and cognition, such as word recognition, sentence and discourse
comprehension, and knowledge representations.  My presentation will review
theoretical and empirical advancements in the scientific understanding of
semantic priming, with particular emphasis on developments in the past 10 years.
One of my conclusions is that traditional models of semantic priming, such as
spreading-activation and compound-cue models, are much too simple to account for
many of the complex findings in the literature.


Tony Movshon
New York University

Adaptation Properties of Neurons in Macaque MT

Jeffrey B. Mulligan
NASA Ames Research Center

Polarization Analysis of the Eye Movement Correlogram

  The eye movement correlogram is obtained by reverse correlation of eye
velocity with the velocity of a randomly moving target.  It resembles an impulse
response, typically peaking at a latency between 100 and 200 milliseconds.  (The
latency increases systematically with decreases in mean luminance and contrast,
and is affected by other stimulus parameters as well.)  The target moves
randomly in two dimensions, and the analysis is performed independently on the
horizontal and vertical components.  Striking differences are seen in the
correlograms obtained in these two directions.  Presumably these differences are
due to neurological differences in the innervation of the horizontal and
vertical rectus muscles.  More detail can be gleaned about these differences
from a "polarization" analysis.  Because it cannot be known for sure that the
H and V axes of the eye tracking system are aligned with either the
physiological axes or the axes of the stimulator, individual correlograms are
computed for uniformly sampled combinations of measurement angles and stimulator
angles.  Next, a principal components analysis is performed on the set of
signals.  The resulting factor loadings vary sinusoidally with the stimulus
angle, but the phase of this variation is different for each component.  The
overall form of the data is consistent with linear superposition of two
mechanisms having different time courses, and the geometric relation between the
mechanisms can be estimated.  A similar analysis may be applied to binocular
tracking data to estimate the location of the "cyclopean eye."
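
The reverse-correlation step itself can be sketched on synthetic data (ours, for illustration; the real analysis is run per axis over the sampled angle combinations and then fed to the principal components analysis):

```python
# Cross-correlating eye velocity with a randomly moving target's velocity
# recovers an impulse-response-like correlogram whose peak lag estimates
# the tracking latency.  Here the "eye" is a delayed, noisy copy of the
# target, so the peak should fall at the true delay.
import random

random.seed(1)
n, lag_true = 5000, 15                     # samples; true latency in samples
target = [random.gauss(0, 1) for _ in range(n)]
eye = [0.0] * n
for i in range(lag_true, n):
    eye[i] = target[i - lag_true] + random.gauss(0, 0.5)

def correlogram(stim, resp, max_lag):
    """Mean product of response with stimulus shifted back by each lag."""
    return [sum(resp[i] * stim[i - lag] for i in range(lag, len(stim)))
            / (len(stim) - lag) for lag in range(max_lag)]

xc = correlogram(target, eye, max_lag=40)
peak_lag = max(range(40), key=lambda k: xc[k])
print(peak_lag)  # recovers the 15-sample delay
```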


Michael J. Mustari
Emory University

The Role of Early Visual Experience in Development of Oculomotor Behavior
Authors:  Mustari, M.J., Tusa, R.J., Das, V.E., Burrows, A., Economides, J., &
Fu, V.

  It is well known that early visual experience plays an essential role in
development of normal visual function.  Less well understood is the role of
early visual experience in the development of gaze-holding, eye-alignment and
oculomotility.  Nevertheless, it is clear that the visual and oculomotor systems
must work together during early postnatal development to produce a fully
functional visual-oculomotor system.  Unfortunately, the synergistic interaction
between visual and oculomotor systems can be easily disrupted by conditions that
interfere with coordinated binocular vision or eye movements.  For example,
subjects with infantile strabismus have at least three linked disorders
including misalignment of the eyes, unsteady gaze-holding (e.g., latent
nystagmus) and asymmetric smooth pursuit.  We have developed effective rearing
procedures for rhesus monkeys that produce animals where gaze-holding,
eye-alignment, smooth pursuit and saccades are differentially affected.  Our
strabismic monkeys evince alternating fixation during smooth pursuit and
saccades.  Such alternating fixation behavior suggests that each eye is being
attended to even though only one eye is actively fixating.  Therefore, to
produce goal-directed saccades, different error signals associated with the
fixating and non-fixating eyes must be taken into account.  The neural
mechanisms and substrate responsible for producing alternating fixation are
unknown.  Our behavioral and single-unit recording studies are beginning to
define the possible neural substrate associated with altered visual-oculomotor
performance in strabismic monkeys.  We find that structures in the brainstem
such as the pretectal nucleus of the optic tract (NOT) are responsible for at
least some gaze-holding disorders.  We expect that more complex behaviors such
as selecting targets for fixation during saccades or smooth pursuit involve
cortical areas exerting top-down influence on the oculomotor system. 


Anthony M. Norcia
Smith-Kettlewell Eye Research Institute

Experience Expectant Development of Contour Integration Mechanisms

  A defining feature of contours in the natural environment is collinearity of
local orientation along the contour.  The normal adult visual system is
extremely good at integrating collinear cues that are widely separated, even
under noisy conditions.  This specialization shows an extended developmental
sequence and is susceptible to disruption by abnormal visual experience during
early visual development.  We find, using Visual Evoked Potentials, that infants
are virtually unable to differentiate co-circular arrangements of Gabor elements
from random ones, even in the absence of a noisy background --- a task that is
trivial for adults.  Adults with a history of disrupted binocular vision that
leads to amblyopia also show an insensitivity to Gabor-defined contours.  Subtle
abnormalities are also present in non-amblyopic eyes of patients with
strabismus, even if neither eye is amblyopic.  These results suggest a
surprising dependence of contour integration mechanisms on normal binocular
experience during development.  Our task and most other contour-integration
tasks use 2-D stimuli, which are "accidental views" of the full range of
natural contours that are oriented in 3-D.  The visual system may thus
have evolved mechanisms which expect 3-D input for their successful refinement
during development.


Tatiana Pasternak
University of Rochester

MT Neurons "Know" About Behaviorally Relevant Motion Stimuli Far Removed From
  Their Receptive Fields
Authors:  Pasternak, T., & Zaksas, D.

  Neurons in cortical area MT have localized receptive fields representing the
contralateral hemifield and have been shown to play an important role in the
discrimination of visual motion.  We recorded the activity of these neurons
during a behavioral task in which visual motion stimuli appeared at locations in
the ipsilateral hemifield, far removed from the classical receptive field. 
Specifically, the monkeys performed a working memory task in which one or both
of the two comparison stimuli separated by a delay, the sample and the test,
were presented at a location remote from the neuron's receptive field.  Three
quarters of the 127 recorded neurons responded to stimuli placed in the
ipsilateral hemifield.  Some cells showed excitation when remotely presented
sample and/or test moved in the preferred direction, while others were strongly
suppressed.  Excitatory firing rates of these remote responses were about 20% of
the maximal response to the preferred stimulus in the receptive field, while
firing rates during the remote suppression dropped by about 50% below the
baseline.  Both types of responses were directional, occurred at least 40-50 ms
later than the responses to stimuli placed in the receptive field, and
reflected the level of coherence in the random-dot stimulus.  Although
responses to the remote sample and test were similar, there were notable
differences.  Excitation during the remote test was more pronounced and less
directional than responses during the remote sample.  These remote effects did
not require any of the stimuli to be presented in the receptive field and also
occurred when both sample and test were placed in the remote location.
  Since area MT is strongly retinotopic, neural activity associated with remote
stimuli is unlikely to be generated locally.  Rather, such effects are
indicative of top-down influences from cortical areas with access to the
information from the entire visual field.  Such influences are supported by the
differences in remote responses to identical stimuli with different behavioral
significance and by their long latencies.  These results demonstrate that during
the behavioral task requiring processing and retention of motion stimuli, the
information about these stimuli and their behavioral relevance reaches MT in the
opposite hemisphere. 


Misha Pavel
Oregon Health and Science University

Computer-Based Cognitive Assessment
Authors:  Pavel, M., Jimison, H., & Pavel, J.

  In many countries, people above 65 years are the fastest growing segment of
the population, with an increasing percentage of health care resources being
spent on conditions associated with aging.  The early detection of changes in
cognitive abilities is important in the management of elder care and in
facilitating independent living for as long as possible.  Any potentially
successful approach to detection must be (1) as unobtrusive as possible and
(2) intrinsically motivating.  In this study, we have first investigated the
type of games that elderly computer users are eager to play.  We then developed
a research version of a popular computer game (FreeCell) so that elders'
performance can be monitored and trended on a daily basis.  To summarize the
results we developed a metric that appears to be correlated with traditional
tests of cognitive abilities.  This indicator of cognitive performance provides
a framework for detecting trends in cognitive performance and for distinguishing
between elders with normal cognitive functioning and those with mild cognitive
impairment.


Alexandre Pouget
University of Rochester

Relating Behavioral Performance to Population Codes in Networks of Spiking
Neurons

  Many studies have attempted to relate changes in behavioral performance (e.g.
as a result of perceptual learning or increased attention) to changes in neural
codes.  In most of those models, the response of individual neurons is
decomposed as a sum of a tuning curve and a noise term.  This approach has led
to one general principle:  the steeper (or the narrower) the tuning curve, the
better the performance.  This result relies on the assumption that changes in
the tuning curve do not affect the noise distribution.  We have started to
investigate the validity of this assumption in networks of spiking neurons.  We
have focused our work on networks in which neurons fire spike trains with
near-Poisson statistics due to balanced synaptic inputs (balanced in the sense
that neurons receive about as much excitation as inhibition).  We report that,
in these kinds of networks, it is impossible to change tuning curves without
changing the noise distribution and, in particular, pairwise correlations.  As a
result, there are many situations in which steeper tuning curves result in worse
performance.  These findings highlight the key role of correlations in 
population codes and the importance of using multielectrode recordings when 
evaluating the information content of a code.
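
A toy calculation (numbers invented; not a spiking simulation) illustrates the
central claim using linear Fisher information, I = f'^T Sigma^{-1} f': steeper
tuning can carry less information once the accompanying change in pairwise
correlations is taken into account:

```python
import numpy as np

def linear_fisher_info(fprime, sigma):
    """Linear Fisher information I = f'^T Sigma^{-1} f'."""
    return float(fprime @ np.linalg.solve(sigma, fprime))

# Baseline: two neurons with opposite tuning slopes and independent noise.
fprime0 = np.array([1.0, -1.0])
I0 = linear_fisher_info(fprime0, np.eye(2))            # = 2.0

# "Improved" code: slopes 20% steeper, but suppose the same change also
# introduced noise correlations aligned with the signal direction.
fprime1 = 1.2 * fprime0
rho = -0.5
sigma1 = np.array([[1.0, rho], [rho, 1.0]])
I1 = linear_fisher_info(fprime1, sigma1)               # = 1.92 < 2.0

print(I0, I1)
```

The steeper code is worse because the induced correlations put more noise along
exactly the direction in population-activity space that carries the signal,
which is why single-neuron tuning alone cannot settle the question.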


Lynne Reder
Carnegie Mellon University

The Effects of Midazolam in Visual Search:  More Evidence that the Hippocampus
Affects Non-explicit Memory in Humans
Authors:  Park, H., Thornton, E., Quinlan, J., & Reder, L.

  The hippocampus is widely thought to serve explicit memory exclusively
because studies of amnesiacs with hippocampal damage report spared implicit
memory.  In this study, normal subjects were tested on an implicit visual search
task under midazolam, an anesthetic that inhibits hippocampal activity and
induces temporary amnesia.  Unlike in the control condition, subjects under the
influence of midazolam did not show facilitation in search times for targets
appearing in repeated configurations.  The findings provide direct evidence
that the hippocampus is responsible for forming associations, regardless of
accessibility to conscious recollection.


John Reynolds
Salk Institute

Neural Mechanisms of Attention in Monkey Extrastriate Visual Cortex

  Visual perception seems effortless, but psychophysical experiments show that
the brain is severely limited in the amount of visual information it can process
at any moment in time.  For instance, when people are asked to identify the
objects in a briefly presented scene, they become less accurate as the number of
objects increases.  The inability to process more than a few objects at a time
reflects the limited capacity of some stage (or stages) of sensory processing,
decision-making, or behavioral control.  Somewhere between stimulating the
retina and generating a behavioral response, objects compete with one another to
pass through this computational bottleneck.
  What are the neural mechanisms underlying this competition?  How are they
influenced by intrinsic properties of the stimulus, such as its visual salience?
How does visual attention modulate this competition to select out behaviorally
relevant stimuli while suppressing irrelevant distractors?  I will describe
psychophysical and single-unit recording experiments we have conducted to
address these questions.  The results of these experiments clarify the role of
attention in modulating visual signals, and provide a set of constraints that
rule out some possible models of extrastriate visual processing.  I will present
a simple cortical circuit that satisfies these constraints.


Andrew Rossi
Vanderbilt University

Top-down Deficits in Target Selection in Monkeys with Prefrontal Lesions
Authors:  Rossi, A.F., Bichot, N.P., Harris, B.J., Desimone, R. & Ungerleider,

  Physiological studies have shown that neural activity in area V4 can be 
modulated by changes in the behavioral relevance of a stimulus.  These studies
suggest that extrastriate cortex may represent a stage in visual processing 
where 'top-down' attentional inputs can influence the representation of
bottom-up stimulus information.  To examine the role of cortical feedback in the
mediation of attentional effects in extrastriate cortex, we removed prefrontal
cortex unilaterally in combination with a transection of the corpus callosum and
anterior commissure in two adult macaques.  As a result, visual processing in
only one hemisphere could be modulated by prefrontal feedback.  Monkeys were
trained to discriminate the orientation of a colored target presented among
colored distractors.  The color of the fixation spot cued the target for
discrimination.  The stimuli were presented in either the control or the
affected hemifield.  We found that orientation thresholds in the affected
hemifield, but not the control hemifield, increased with increasing frequency of
cue change.  To determine if the performance deficit was the result of a
disruption of top-down processing, we trained the monkeys to perform a
'bottom-up' variation of the task in which the target for discrimination was
defined by color pop-out.  For this task, performance in the affected hemifield
was not differentially affected by the frequency of target change.  We attribute
the difference in performance between the two tasks to an increase in the
"attentional load" or top-down component in the cueing task.  When the target
identity was defined by color pop-out (bottom-up task), target selection was
presumably accomplished by local cortical mechanisms.  These findings support
the notion that prefrontal cortex is involved in the attentive selection of
behaviorally relevant stimuli.


Clifton Schor
University of California, Berkeley

Adaptable Coordination of Binocular Eye Alignment With Direction of Gaze
Authors:  Schor, C., Maxwell, J. & McCandless, J.

  Voluntary eye and head movements are accompanied by involuntary vertical and
cyclo vergence components that maintain binocular eye alignment.  Combined
voluntary and involuntary movements are coordinated by an intrinsic
sensory-motor transform (SMT).  Efference copy signals from the voluntary
component are transformed to a kinematic plan of eye rotation vectors that guide
the control of vergence by an inverse plant model.  Neural control of
cross-coupling is simplified by mechanical linkages (e.g. orbital muscle
pulleys) and it is preprogrammed to optimize ergonomics and viewing geometry.
For example, combined voluntary and involuntary movements described by
Listing's law optimize motor efficiency and binocular disparity stimuli for
space perception.  However, preprogrammed responses are inappropriate for novel
loads on the oculomotor system that can result from development, disease,
trauma, and environmental factors such as optical distortions.  These loads are
corrected by adaptive calibration of either the SMT or inverse plant. 
  We examined the versatility of the calibration process by measuring the
spatial spread of vertical- and cyclo-disparity vergence aftereffects trained at
a limited number of eye/head positions and compared the response patterns to the
generalization predicted by several feed-forward models.  These comparisons
yielded two non-exclusive adaptation models of the coupling between vergence
and eye/head position.  Disparity vergence is the error signal that is reduced
by adaptation.  Adaptation responses to monotonic changes in combined vergence
and eye/head position were modeled with changes of the inverse plant that
adjusts muscle bias and gain and the location of orbital muscle pulleys.  
Adaptation responses to non-monotonic changes in vergence were modeled more
centrally at the SMT with a neural network and association matrix that combine
efferent correlates of head and eye position.  Synaptic connections between the
vergence controller and associated efference copy signals, derived from eye
position and otolith signals in the brainstem, are weighted during adaptation. 
Following training, the adjusted vector sum of efference copy signals, weighted
by the coupling SMT, continues to guide vergence in association with eye and
head orientation.  The two sites of adaptation could compensate for peripheral
and central neuromuscular disorders. 


Michael N. Shadlen
University of Washington 

A Neural Integrator for Decision Making

  In the context of deciding between two sensory hypotheses, a simple difference
in spike rates from sensory neurons with different selectivity approximates a
log likelihood ratio in favor of one sensory interpretation over another.  I
will summarize experimental evidence from the alert monkey that supports the
following tentative conclusions.  (1) The brain uses such a difference to make
decisions about the direction of motion in a 2-alternative direction
discrimination task.  (2) The accumulation of this difference to threshold
explains the speed and accuracy of simple decisions.  (3) Neurons in the lateral
intraparietal area (LIP) act as integrators:  their activity reflects the
accumulation of the spike rate difference from direction selective neurons.  In
addition to experimental evidence for this scheme, I will attempt to address the
question of how general is the approximation to log likelihood ratio and what
this might mean for neural coding in general. 
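
The accumulate-to-bound account in points (1)-(3) can be sketched as a toy race
between two Poisson pools (the rates, bound, and time step are illustrative
choices, not measured values):

```python
import numpy as np

rng = np.random.default_rng(1)

def decide(rate_pref=60.0, rate_null=40.0, bound=15.0, dt=0.01, max_t=2.0):
    """Accumulate the spike-count difference between a pool preferring the
    true direction and an opposing pool until it reaches +bound (correct)
    or -bound (error); returns (correct, decision_time)."""
    evidence, t = 0.0, 0.0
    while t < max_t:
        evidence += rng.poisson(rate_pref * dt) - rng.poisson(rate_null * dt)
        t += dt
        if abs(evidence) >= bound:
            return evidence > 0, t
    return evidence > 0, t                   # forced choice at the deadline

results = [decide() for _ in range(500)]
accuracy = float(np.mean([correct for correct, _ in results]))
mean_rt = float(np.mean([t for _, t in results]))
print(accuracy, mean_rt)
```

Raising the bound trades speed for accuracy: decisions take longer but errors
become rarer, which is the speed-accuracy signature the scheme is meant to
explain.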


Shihab Shamma
University of Maryland

Rapid Plasticity of Spectrotemporal Receptive Fields in Primary Auditory Cortex

  We investigated the hypothesis that task performance can rapidly and
adaptively reshape cortical receptive field properties in accord with specific
task requirements and salient sensory cues.  Neuronal responses were recorded in
the primary auditory cortex of behaving ferrets that were trained to detect a
target tone of any frequency and to discriminate between tones of different
frequencies.  Cortical plasticity was quantified by measuring focal changes
observed in units' Spectro-Temporal Response Fields (STRFs) in a series of
passive and active behavioural conditions.  The experimental design enabled STRF
measurements to be made simultaneously with task performance, providing multiple
snapshots of the dynamic STRF during ongoing behaviour.  Attending to a specific
target frequency during the detection task consistently induced localised
facilitative changes in STRF shape, which were swift in onset.  In the
discrimination task, there was significant suppression of the STRF at the
reference tone frequency.  The collective effect of such modulatory changes may
enhance overall cortical responsiveness to the target tone and increase the
likelihood of "capturing" the attended target during the detection and
discrimination task.  Some receptive field changes persisted for hours following
task performance and may contribute to long-term sensory memory.


Steve Shevell
University of Chicago

Assimilation Assimilation (with apologies to Chubb, Sperling & Solomon)

  CS&S show that physical contrast in one part of the visual field affects the
contrast perceived in a separate region.  The work here reveals a related
phenomenon for assimilation: chromatic assimilation within surrounding regions
can carry over to a separate area.  This new and substantial color shift is
explained by perceptual grouping: a color shift in the direction of chromatic
assimilation in one part of the visual field, due to local chromatic context,
carries over to a separate region that belongs to the same perceptual group.  In
experiments, the color appearance of a test square within various surrounds was
measured by asymmetric matching.  When the test square was at the center of an
"hourglass" structure formed by other elements in the surround, the test shifted
in color appearance toward the appearance of the other hourglass elements, whose
color was affected by local chromatic induction.  All of the measurements are
accounted for by chromatic assimilation among elements perceived to belong to
the same group.


Richard Shiffrin [Sunday]
Indiana University

Skiing the Backcountry

  Ascending and descending mountains is far superior in winter than in summer:
no sweaty overheating, no insects, more isolation, prettier scenery, and, not
least, descending in deep untracked powder.


Richard Shiffrin [Friday]
Indiana University

Perceiving Words With Case and Color Without Using Case and Color
Authors:  Sanborn, A., Malmberg, K., & Shiffrin, R. 

  A briefly presented low-contrast word (e.g. BRAIN), adjusted to be at
threshold, has a given case and color.  Following the presentation two choices
are presented.  The choices may differ in a) spelling (e.g. BRAIN-DRAIN), b)
case or color (e.g. BRAIN-brain), or c) both (e.g. BRAIN-drain).  If the target
is not masked, performance is better when case differs (conditions b and c are
superior).  If the target is masked by a form and/or color mask (e.g. @-signs
alternating in color), then spelling differences produce superior performance
(e.g. conditions a and c are superior).  We suggest that not only form and color
information are extracted from the target flash before the mask arrives, but
also (occasionally) higher level features such as letter or even word identity.
Without a mask, the form and color information is more informative and used for
decision making.  However, we suggest that the mask adds form and/or color
noise, leading observers to base decisions on higher-level information such as
letter or word identity.  Interestingly, such higher level information seems to
be dissociated from form and color information, in the sense that the
higher-level coding evidently does not retain form and color codes.


George Sperling 
University of California, Irvine

Quantifying Visual Spatial Attention
Authors:  Sperling, G., Shih, S., Gobell, J., & Tseng, C.

If time permits, I'll describe two paradigms that lead to formal, computational
theories of visual spatial attention.  In the first, with Shu-I Shih, subjects
view a rapid sequence of 3x3 letter arrays.  At a random moment, a high-,
middle-, or low-pitch tone directs them to report the top, middle, or bottom
row of letters from the earliest array simultaneous with or after the tone.
From various analyses of their reports, we can infer that Ss open an
attentional gate to admit letters to visual short-term memory about 0.1 sec
after the tone simultaneously at all attended locations, and that the time
course of the attention gate, about 0.3 sec, is the same at all attended
locations.  That
is, information is accumulated in parallel at all locations within a single
attentional glimpse.
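
A minimal sketch of such a gate (the gamma-like shape is an assumption; only
the roughly 0.1-sec onset and 0.3-sec duration come from the inference above):

```python
import numpy as np

def gate(t, onset=0.1, width=0.3):
    """Gamma-like attention gate: zero before `onset`, then a smooth pulse
    lasting roughly `width` seconds (the shape itself is an assumption)."""
    s = np.maximum(t - onset, 0.0) / (width / 3.0)
    return s * np.exp(1.0 - s)               # peaks at s = 1, decays after

# Letter arrays arrive every 0.1 sec; each array's weight in short-term
# memory is the gate value at its arrival time.  The same gate applies at
# every attended location, so accumulation is parallel across locations.
array_times = np.arange(0.0, 0.8, 0.1)
weights = gate(array_times)
print(np.round(weights, 3))
```

Arrays before the gate opens contribute nothing; the array arriving near the
gate's peak dominates the report, with later arrays weighted progressively
less.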
  The second experiment, with Joetta Gobell and Chia-huei Tseng, uses a
rapid stream of 12x12 arrays of disks.  Subjects must attend to a subset of
six rows (or six columns) in which a larger disk, the target, is to be
detected.
Ten false targets are placed in the unattended locations. The row subsets are
arranged to form gratings.  The ability of the subjects to modulate attention
to conform to the requested grating enables a Fourier analysis of attentional
modulation.  From this analysis, a predicted response of spatial attention to
any requested distribution of attention can be computed.  Insofar as it is
successful (too soon to say), this method would completely characterize the
spatial constraints on attention.


Mark Steyvers
University of California, Irvine

The Author-Topic Model:  A Generative Model for Documents Based on Authors and
Topics
Authors:  Steyvers, M., Rosen-Zvi, M., Griffiths, T., & Smyth, P. 

  We present the author-topic model, a generative model for authors and
documents based on the  Latent Dirichlet Allocation model (Blei, Ng, & Jordan,
2003) that reduces the generation of documents to a simple series of
probabilistic steps.  Each author is associated with a topic mixture, and the
choice of words in a collaborative paper is assumed to be the result of a
mixture of the authors' topic mixtures.  The model is applied to a collection
of 1.7K NIPS conference papers and 160K CiteSeer abstracts.  We show that the
model is able to extract interpretable topics and that the extracted topic
mixtures associated with authors are sensible.  Based on the derived
representations, statistical inference can be used to pose the following
queries:  1) what topics does a given author write about?  2) given a document,
what author is most likely to have written about the topics expressed in the
document?  3) how broad is the research of an author as expressed by the topics
distribution?  4) how unusual is a paper for a given author?  and 5) what author
is similar to a given author?  These queries are not only relevant when
exploring a scientific domain or developing an author profile, but also in
practical situations when finding targets for funding or assigning reviewers to
a paper or grant proposal.
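
The generative steps can be sketched as follows (the parameters here are drawn
at random for illustration; in the model itself they are estimated from the
corpus):

```python
import numpy as np

rng = np.random.default_rng(2)
n_topics, n_authors, vocab_size = 3, 4, 20

# Hypothetical parameters: in the model these are learned from documents;
# here they are simply drawn from uniform Dirichlet priors.
author_topics = rng.dirichlet(np.ones(n_topics), size=n_authors)  # theta
topic_words = rng.dirichlet(np.ones(vocab_size), size=n_topics)   # phi

def generate_document(authors, n_words):
    """Author-topic generative process for one collaborative paper."""
    words = []
    for _ in range(n_words):
        a = rng.choice(authors)                        # author chosen uniformly
        z = rng.choice(n_topics, p=author_topics[a])   # topic from author's mixture
        w = rng.choice(vocab_size, p=topic_words[z])   # word from that topic
        words.append(int(w))
    return words

doc = generate_document(authors=[0, 2], n_words=50)
print(doc[:10])
```

Because each word is attributed to one author and one topic, inverting this
process by statistical inference is what supports the queries listed above.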


Sharon Thompson-Schill
University of Pennsylvania

What Do the Parietal Lobes Know About Objects? 

  The link between processes involved in object recognition and processes 
involved in thinking about the appearance of objects has been established by
neuropsychological and neuroimaging evidence of an association between concept
retrieval and the ventral visual processing stream.  For example, when thinking
about the color of a banana, increased blood flow has been observed in areas
selectively activated during color perception.  However, investigations of the
neural bases of perception have widely supported the idea that there are at
least two parallel processing streams in visual perception: a ventral stream
that supports object recognition and a dorsal stream that supports object 
localization and visually-mediated action.  This latter stream has largely been
ignored in explorations of the neural basis of concept retrieval.  We propose
that (i) the dorsal visual processing stream supports the retrieval of some
aspects of object knowledge (i.e., shape and size), and (ii) the extent of
dorsal stream involvement in concept retrieval will vary across objects as a
function of spatiomotor interactions with each object.  Our hypothesis extends
current theories about the role of the parietal lobes in grasping and other 
visually-guided actions into the domain of long-term knowledge, in support of
distributed, modality-specific models of concept knowledge.  In addition, we
will argue that our account can provide a modality-specific explanation of
neuropsychological deficits that have been interpreted elsewhere as evidence for
category-specific mental representations.


Adrian von Muhlenen
University of British Columbia

Does Motion Capture Attention?

  Spatial attention can be attracted by a variety of cues (e.g., brightness
changes, arrows, auditory tones).  This paper will present a series of studies
that explored the ability of motion to attract attention.  One study used moving
background dots as a stimulus in a simple detection task.  The results showed
that systematic background motion per se does not capture attention, with the
exception of looming motion: There was a strong advantage for targets presented
inside the core of the looming motion.  Another study explored the specific
timing conditions under which a single moving object can capture attention.  
First results suggest that the onset and offset of motion, but not continuous
motion by itself, attract attention.


Alexander R. Wade
Smith-Kettlewell Eye Research Institute

An fMRI Investigation of Coherent Flow Patterns 
Authors:  Wade, A.R., Norcia, A.M., Vildavski, V.Y., & Pettet, M. 

  Certain types of sparse dot patterns produce a strong percept of coherent
motion or flow.  The most obvious of these are true coherent motion stimuli
consisting of rapidly-updating, short-lifetime dots, but even static stimuli
such as Glass patterns or rapidly-refreshed Glass patterns (so-called 'dynamic
Glass' patterns) give an impression of oriented motion, albeit with an
ambiguous direction.  Dynamic Glass patterns are particularly interesting as it
has
recently been shown that they can interact with true motion stimuli both at a
behavioural and single-unit level.  In general, these stimuli are interesting as
they probe long-range integrative mechanisms and can be well-controlled.
  Using a novel stimulus paradigm containing three temporal scales (12Hz, 1Hz,
1/24Hz) we have investigated the cortical response to different flow stimuli
using fMRI and EEG.  The fMRI results are reported here.
  We find different patterns of activation to coherent motion, Glass and dynamic
Glass patterns.  Glass patterns activate two distinct cortical regions:  one
anterior and lateral to V3B on the dorsal surface, the other just lateral to V4
on the ventral surface.  Dynamic Glass patterns show a similar, but stronger
pattern of activation.  Coherent motion activates V3a and MT+.  Interestingly,
dynamic Glass patterns generate relatively little activation in MT+, despite
eliciting a strong percept of streaming motion.  However, there is some evidence
that a ventral subregion of MT+ is coactivated in the two conditions, pointing
a possible form-based input into the motion system.


Michael A. Webster
University of Nevada

Extrinsic Versus Intrinsic Adaptation
Authors:  Webster, M.A., Delahunt, P.B., Werner, J.S., & Kaping, D.

  Adaptation rapidly renormalizes visual sensitivity to match it to the stimuli
currently before us.  But what are the states of adaptation when the relevant
adapting stimuli are not present?  These states reflect more intrinsic
normalizations in the visual system and may depend on sampling the world over
much longer time scales.  We discuss two examples of these adaptive adjustments,
based on studies of changes in color perception (the achromatic locus) following
cataract surgery, and on changes in face perception (judgments of ethnicity)
following exposure to a new ethnic population.  Both cases point to a very slow
recalibration of perception over periods of months or more that may compensate
for changes in either the observer or the environment.  Adjustments in the
intrinsic neutral points are probably distinct from - and may often be masked
by - the more short-term, extrinsic adapting states that are induced by the
momentary presence of a specific stimulus.