2024 Program Abstracts

2024 Robert M. Boynton Lecture


Adaptation and visual experience   Michael Webster, Department of Psychology, University of Nevada at Reno

Processes of adaptation continuously regulate the response characteristics of the visual system to match the ambient visual environment. These adjustments occur throughout the visual stream, and normalize neural coding both for properties of the observer and for properties of their environment. They thus profoundly affect the nature and content of visual awareness. I will illustrate how adaptation promotes shared perceptual experiences among observers despite optical and physiological differences, while leading to divergent percepts in observers immersed in different stimulus contexts. I will also explore the implications of adaptation for what the visual system can “know” about the world.


Invited session: Focusing on the Human Fovea 

The machinery of human vision is spatially inhomogeneous, with a neuronal sampling gradient that peaks near the line of sight and declines sharply with eccentricity. Despite the foveated nature of the visual system, our field of vision appears comparatively uniform in quality. This session will feature recent work that characterizes the structure of foveal pathways using high-resolution ophthalmoscopy and neuroimaging, and will highlight complementary behavioral studies that show how oculomotor control, attentional processing, and the integration of information across the central retina influence our subjective experience.


Plasticity and stability in human foveal pathways   Heidi Baseler, University of York, UK

Co-authors: Antony Morland, University of York, UK; Brian Wandell, Stanford University, USA; Michael Hoffmann, Otto-von-Guericke-University Magdeburg, Germany; Netta Levin, Hadassah Medical Center, Israel

The fovea is a highly specialised region of the human retina, both structurally and functionally. Although it occupies a small fraction of the retina, a great deal of neural territory is devoted to processing its outputs downstream. What happens to these pathways when the fovea is compromised and individuals rely more on peripheral vision? Examining structure and function in several different special populations, we will describe how human foveal pathways respond to changes in input both early and later in development.

Funding acknowledgements:  European Union Horizon 2020, UKRI Medical Research Council 


Foveal sampling in space and time   Wolf Harmening, University of Bonn, Department of Ophthalmology, AOVISION Laboratory

Co-authors: Julius Ameln, University of Bonn, Department of Ophthalmology, AOVISION Laboratory; Veronika Lukyanova, University of Bonn, Department of Ophthalmology, AOVISION Laboratory; Jenny L. Witten, University of Bonn, Department of Ophthalmology, AOVISION Laboratory

Until very recently, the spatial arrangement of the photoreceptors of the human foveal center and their relationship to vision remained uncharted territory. With adaptive optics photostimulation techniques that overcome the optical blur of the human eye and at the same time allow precise tracking of each photoreceptor cell as it is actively moved across the image formed on the retina, we can see what the photoreceptors see, and psychophysically study structure-function relationships at the level of individual foveolar cells. In this talk I will present recent anatomical and psychophysical results that show both the similarities and differences in foveal mosaics in humans, and how the highly dynamic and adaptive sampling behavior of an eye aids visual performance.

Funding acknowledgements:  Funded by the German Research Foundation (DFG, Ha 5323-5/1), the Dr. Eberhard and Hilde Rüdiger Stiftung (PUNKTBILD), and the Gertrud Kusen Stiftung (AO-DRIFT) 


Active vision at the foveolar scale: Insights from fixational oculomotor behavior and retinal anatomy   Martina Poletti, University of Rochester

Vision is an active process even at its finest scale. Within the 1-deg foveola, the visual system is primarily sensitive to changes in the visual input, and it has been shown that fixational eye movements reformat the spatiotemporal flow to the retina in a way that is optimal for fine spatial vision. Using high-precision eye-tracking coupled with a gaze-contingent display system capable of localizing the line of sight with arcminute precision, and an Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) for high-resolution retinal imaging enabling retinal-contingent manipulations of the visual input, we show that the need for active foveolar vision also stems from the non-uniformity of fine spatial vision across this region. Further, we show that the visual system is highly sensitive even to a small sub-foveolar loss of vision, and that fixation behavior is readjusted to compensate for this loss. Overall, the emerging picture is that of a highly non-homogeneous foveolar vision characterized by a refined level of control of attention and fixational eye movements at this scale.

Funding acknowledgements:  NIH R01 EY029788-01


Perception in the foveal rod scotoma   Alexander C. Schütz, Marburg University

Resources for visual processing, such as the density of cone photoreceptors and the number of neurons in visual cortex, prioritize the fovea to maximize contrast sensitivity and visual acuity under photopic (daylight) conditions. Consequently, the fovea does not contain any rod photoreceptors, which are saturated under photopic conditions but allow for vision under scotopic (dim-light) conditions. Perception in this foveal rod scotoma is particularly interesting given the important role of foveal vision under photopic conditions. Our results show that there is perceptual completion of the foveal rod scotoma under scotopic conditions. Interestingly, humans trust this filled-in information more than veridical information from the peripheral visual field in a metacognitive confidence task. When the background is flickered, a blurry counter-phase afterimage becomes visible in the fovea. This afterimage is most apparent at a flicker frequency of about 3 Hz and appears considerably larger than the rod-free zone. These results show that perceptual completion can occur even in the fovea and might not be limited to the absolute rod scotoma.

Funding acknowledgements:  This work was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101001250) and by “The Adaptive Mind”, funded by the Excellence Program of the Hessian Ministry of Higher Education, Science, Research and Art.


Invited session: Visual Prosthetics 

Amid challenges in commercializing retinal implant technology, scientific efforts are underway to learn from previous obstacles and spearhead the next wave of prosthetic vision innovations. This session will cover the current state and future directions in visual prosthetics, focusing on the creation of implants with large counts of flexible electrodes, and how their design and functionality may be informed by advancements in our understanding of device-neural tissue interactions and artificial intelligence. Insights from current clinical trials will provide a well-rounded view of the progress toward more effective visual prosthetic solutions.


The Neuralink Implant as a visual prosthesis   Dan Adams, Neuralink

Neuralink has developed a general purpose neural implant capable of recording from and stimulating the cerebral cortex. The implant connects wirelessly to an external computer to transmit neural signals and receive commands to evoke neural activity by electrical stimulation. Its first application is to enable effortless computer use for people with paralysis. I will describe the technological basis of the implant and present data from pre-clinical studies developing its potential to restore visual perception to the blind.


Artificial vision via high-channel-count visual cortical stimulation in primates   Xing Chen, University of Pittsburgh

Blindness affects 40 million people worldwide, and a neuroprosthesis may restore functional vision in the future. We developed a 1024-channel, chronically implantable prosthesis for the monkey visual cortex, using electrical stimulation to elicit percepts of dots of light (‘phosphenes’) across hundreds of electrodes. Phosphene locations matched the receptive fields of stimulated neurons, and V4 activity predicted phosphene detection during stimulation in V1. Next, we stimulated multiple electrodes simultaneously to generate percepts composed of multiple phosphenes. The monkeys could immediately recognize simple phosphene shapes, directions of motion, and letters. We developed techniques such as semi-automatic phosphene mapping and current thresholding to expedite calibration of a prosthesis. Finally, we tested and validated several of our stimulation and calibration methods in blind human volunteers, demonstrating the potential of electrical stimulation to restore life-enhancing vision in the blind.

Funding acknowledgements:  NWO (STW Grant Number P15-42 'NESTOR'; ALW Grant Number 823-02-010 and Cross-over Grant Number 17619 'INTENSE'). European Union (ERC Grant Numbers 339490 'Cortic_al_gorithms' and 101052963 'NUMEROUS,' H2020 Research and Innovation programme Grant Number 899287 'NeuraViper'). The Human Brain Project (Grant Number 650003). BrainLinks-BrainTools, Cluster of Excellence funded by the German Research Foundation (DFG, EXC 1086).


Bidirectional communication with the human visual brain: Towards an advanced cortical visual neuroprosthesis for the blind   Eduardo Fernandez, University Miguel Hernández, Elche, Spain

A long-held dream of scientists has been to transfer information directly to the visual cortex of blind individuals, to restore a rudimentary form of sight. However, in spite of all the progress in neuroelectronic interfaces, the biological and engineering problems that must be solved for cortical implants to succeed are much more complex than originally believed, and a clinical application has not yet been achieved. We will present our recent results regarding the implantation of intracortical microelectrodes in four blind volunteers (ClinicalTrials.gov identifier NCT02983370). Our findings demonstrate the safety and efficacy of chronic intracortical microstimulation via a large number of electrodes in humans, showing its high potential for restoring functional vision in the blind. The recorded neural activity and the stimulation parameters were stable over the whole experimental period, and multiple electrode stimulation evoked discriminable patterned perceptions that were retained over time. Moreover, there was a learning process that helped the subjects to recognize several simple and complex patterns. Additionally, our results show that we can accurately predict phosphene thresholds, brightness levels, and the number of perceived phosphenes from the recorded neural signals. These results highlight the potential for utilizing the neural activity of neighboring electrodes to accurately infer and control visual perceptions.


A user-centred design approach for improving object recognition in simulated phosphene vision   Yagmur Güçlütürk, Radboud University

Visual cortical implants are a promising neurotechnology designed to provide blind individuals with a basic form of visual perception, known as "phosphene vision." This perception occurs when implanted microelectrode arrays electrically stimulate specific areas of the brain, such as the primary visual cortex, resulting in the experience of phosphenes. In my lab, we focus on developing algorithms that process visual information to enable meaningful scene representation within the technological and biological constraints of these implants. We evaluate our algorithms through biologically plausible simulations with sighted adults, aiming to approximate the visual experiences of potential implant users. In this talk, I will present our latest findings, highlighting the significance of user-centred design in neurotechnology, with a particular emphasis on visual cortical implants. Additionally, I will demonstrate how a dynamic, gaze-controlled semantic segmentation approach for scene representation can enhance object recognition in phosphene vision.
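Simulated phosphene vision of the kind described above is often approximated, at its simplest, by rendering each active electrode as a Gaussian blob of light. The sketch below uses that common simplification only as illustration; the grid layout, blob width, and brightness coding are assumptions, not the lab's actual simulator.

```python
import numpy as np

def render_phosphenes(active, positions, sigma=2.0, size=64):
    """Render a simulated phosphene percept as a sum of Gaussian blobs.

    active:    per-electrode stimulation strength (0 = off)
    positions: (n, 2) array of phosphene centres in pixel coordinates
    """
    yy, xx = np.mgrid[0:size, 0:size]
    image = np.zeros((size, size))
    for a, (cx, cy) in zip(active, positions):
        if a > 0:
            image += a * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
    return np.clip(image, 0, 1)

# A hypothetical 10x10 electrode grid; light up a vertical bar of electrodes.
grid = np.stack(np.meshgrid(np.linspace(8, 56, 10), np.linspace(8, 56, 10)), -1).reshape(-1, 2)
active = np.array([1.0 if abs(x - 32) < 4 else 0.0 for x, y in grid])
percept = render_phosphenes(active, grid)
print(percept.shape, percept.max() > 0)  # (64, 64) True
```

In a gaze-controlled pipeline such as the one described in the talk, `active` would be recomputed each frame from a semantic segmentation of the region around the current gaze position.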


Invited session: Machine Learning and AI Approaches to Retinal Diagnostics 

The human retina is a vascularized neural structure that is uniquely accessible to optical imaging. This means that large amounts of imaging data are available from clinical retinal scans and it is possible to use these data to teach machine learning models to diagnose disease. Our speakers will present the current state-of-the-art analysis of retinal imaging using AI/machine learning and discuss the broader ethical ramifications of this technology and potential future applications.


Optimizing clinician-AI teaming to enhance glaucoma care   Jithin Yohannan, Wilmer Eye Institute, Johns Hopkins University

We investigated how AI explanations help primary eye care providers differentiate between immediate and non-urgent referrals for glaucoma surgical care. We developed explainable AI algorithms to predict glaucoma surgery needs from routine eye care data in order to identify high-risk patients. We included intrinsic and post-hoc explainability and conducted an online study with optometrists to assess human-AI team performance, measuring referral accuracy, interaction with AI, agreement rates, task time, and perceptions of user experience. AI support improved referral accuracy among 87 participants (59.9% with AI vs. 50.8% without), though human-AI teams underperformed compared to AI alone: on a separate test set, our black-box and intrinsic models achieved 77% and 71% accuracy, respectively, in predicting surgical outcomes. Participants felt they used AI advice more with the intrinsic model, finding it more useful and promising. Without explanations, deviations from AI recommendations increased. AI support did not increase workload, confidence, or trust, but it reduced challenges. We identify opportunities for human-AI teaming in glaucoma management, noting that AI support enhances referral accuracy but that a performance gap remains relative to AI alone, even with explanations. Because human involvement remains crucial in medical decision-making, future research is needed to optimize collaboration, ensuring positive experiences and safe AI use.

Funding acknowledgements:  5 K23 EY032204-04; Unrestricted grant from Research to Prevent Blindness; Brightfocus National Glaucoma Research Grant 825150, ARVO Epstein Award


Towards more robust AI models in ophthalmology   Adam M Dubis, University of Utah, Moran Eye Center

Co-authors: Mustafa Arikan, UCL Institute of Ophthalmology; James Willoughby, UCL Institute of Ophthalmology; Watjana Lilaonitkul, UCL Global Business School for health

Robust processes have been established to review and approve new treatment options within ophthalmic care, whether pharmaceuticals or medical devices. These processes are designed to ensure that any new medical product is safe, effective, and well-understood in terms of its functionality. In contrast, the rapid evolution of artificial intelligence (AI) has been heralded as a game-changer in healthcare, promising to transform patient care, doctor-patient interactions, and back-office functions. While AI's potential has been demonstrated by numerous groups worldwide, applying the same rigorous standards used for medical product approval reveals several areas where AI must improve. In this talk, I will focus on our group’s efforts to develop safe and robust AI models for various ophthalmology functions. Specifically, we will explore how using uncertainty can determine data value and enhance model robustness, both within individual models and against adversarial attacks. These strategies will be applied to tasks such as segmentation, classification, and object detection. One of the significant challenges in developing medical AI is dealing with imbalanced data, especially when identifying small objects. We will also review cutting-edge attention-based network features that can be developed to address these challenges and leverage known structures within retinal anatomy, particularly in object detection tasks.

Funding acknowledgements:  This research was supported by the NIHR Moorfields Biomedical Research Centre. This work was supported by National Institutes of Health Core Grant (EY014800), and an Unrestricted Grant from Research to Prevent Blindness, New York, NY, to the Department of Ophthalmology & Visual Sciences, University of Utah


Using AI to predict age, sex and disease from retinal OCT images   Anya Hurlbert, Newcastle University

Multiple factors - from normally varying characteristics including age and sex to various disease processes -  contribute to individual differences in how people see. In turn, these factors may be associated with subtle differences in the anatomy of the neural structures underpinning vision, from the eye to visual cortex.  We - the OCTAHEDRON project team - examine whether AI models can learn to predict individual characteristics and diagnose neurodegenerative diseases from such structural variations embedded in retinal OCT images.  The AI models are built from large datasets of annotated and unannotated OCT images, from northeast England NHS Hospital trusts, the UK Biobank and elsewhere. One model exploits a CNN-based retinal layer segmentation algorithm (NDD-SEG), designed to be robust across individuals, diseases and imaging instruments, to generate thickness maps feeding a further classification model which differentiates between individuals with and without multiple sclerosis,  achieving 97% balanced accuracy. Other results I will describe compare different techniques – CNN, transformer and traditional machine learning regression methods – to predict sex and age, and ultimately to differentiate between generally healthy and unhealthy ageing trajectories.


Using AI with small datasets of retinal images   Marinko V. Sarunic, University College London

Artificial intelligence (AI) training generally requires large datasets. This poses challenges for applying AI to less common conditions, and for processing data acquired with bespoke instruments and new techniques. We present our progress on applying AI methods to retinal imaging of structure and function.

Funding acknowledgements:  Moorfields Eye Charity; NIHR BRC at Moorfields and UCL Institute of Ophthalmology


Invited session: The visual ecology of colour and light 

The human visual system has been moulded by the spatial and spectral properties of its environment. This session illustrates four examples of this interaction - showing how human visual processing is affected by changes in mean illumination and colour across the day, by statistical regularities in spatiochromatic signals, and by our own activity within those environments.


How does melanopsin help us to see?   Annette Allen, University of Manchester

Environmental light intensity (irradiance) is a powerful regulator of physiology and behaviour. A stable neuronal representation of light intensity is grounded in a specialised retinal output channel, found in humans and other mammals, and arising from intrinsically photosensitive retinal ganglion cells (ipRGCs). These are a rare class of retinal ganglion cells with autonomous sensitivity to light, thanks to their expression of the photopigment melanopsin. Melanopsin photoreception is optimised to encode low-frequency changes in the light environment and, as a result, extends the temporal and spatial range over which light is detected by the retina. ipRGCs innervate many brain areas, and this allows melanopsin light responses to be used for diverse purposes, ranging from the synchronization of the circadian clock with the solar day to light's regulation of mood, alertness, and neuroendocrine and cognitive functions. There is now abundant evidence that ipRGCs also make an important contribution to the processes of perceptual vision, via their projection to the visual thalamus. Here I will discuss ongoing research exploring how melanopsin extends the spatial and temporal range over which light is detected by the retina, and the role this plays in augmenting the detection of patterns in brightness.


Influences of the colour statistics of natural scenes on colour perception   Jenny M Bosten, University of Sussex

Exposure to the colour statistics of natural scenes can both induce and counteract individual differences in colour perception. If different people inhabit different chromatic environments, calibration of the visual system to the colour statistics of those environments can cause individual differences in colour perception. Conversely, exposure to common colour statistics in a common visual environment can reduce individual differences in colour perception that would otherwise be caused by individual differences in physiological factors such as macular pigment density, lens density and cone spectral sensitivities. I will present some examples of our research on these themes, including a cross-environmental study on colour perception between participants living in remote rural versus urban environments in Ecuador (Skelton et al. 2023, Proc. Roy. Soc. B), and studies that explore how the visual systems of anomalous trichromats compensate for their altered cone spectral sensitivities.

Funding acknowledgements:  ERC CoG 772193 COLOURMIND to Anna Franklin and ERC StG 949242 COLOURCODE to Jenny Bosten


The visual processing of animal warning signals   Julie Harris, University of St Andrews

In the natural world, some animals display highly specific patterns that are conserved across members of a species.  One form of patterning is thought to enable some animals (often toxic or unpalatable) to be easily seen, and possibly easily remembered, so as to warn off potential predators. These patterns, often high contrast in both colour and luminance, are known as warning signal patterns. I will review what is known about such warning signals in nature, and how they are studied. I will then describe research that uses modelling of the first stages of visual processing, combined with behavioural experiments using real predators, to demonstrate how warning signals might have specific effects on the brain that other patterns do not.


Contributed Talks I


Detecting and characterising microsaccades from AOSLO images of the photoreceptor mosaic using computer vision   Maria Villamil, University of Oxford 

Co-authors: Allie C. Schneider, University of Oxford; Jiahe Cui, University of Oxford; Laura K. Young, Newcastle University; Hannah E. Smithson, University of Oxford

Fixational eye movements (FEMs), especially microsaccades (MS), are promising biomarkers of neurodegenerative disease. In vivo images of the photoreceptor mosaic acquired using an Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) are systematically distorted by eye motion. Most methods to extract FEMs from AOSLO data rely on comparison to a motion-free reference, giving eye-position as a function of time. MS are subsequently identified using adaptive velocity thresholds (Engbert & Kliegl, 2003). We use computer vision and machine learning (ML) for detection and characterisation of MS directly from raw AOSLO images. For training and validation, we use Emulated Retinal Image CApture (ERICA), an open-source tool to generate synthetic AOSLO datasets of retinal images and ground-truth velocity profiles (Young & Smithson, 2021). To classify regions of AOSLO images that contain a MS, images were divided into a grid of 32-by-32-pixel sub-images. Predictions from rows of sub-images aligned with the fast-scan of the AOSLO were combined, giving 1ms resolution. Model performance was high (F1 scores >0.92) across plausible MS displacement magnitudes and angles, with most errors close to the velocity threshold for classification. Direct velocity predictions were also derived from regression ML models. We show that ML models can be systematically adapted for generalisation to real in vivo images, allowing characterisation of MS at much finer spatial scales than video-based eye-trackers.
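The tiling and row-wise pooling described above can be sketched as follows. The 32-by-32-pixel grid and the ~1 ms per-scan-row resolution follow the abstract; the frame size, the stand-in classifier output, and the max-pooling rule for combining a row of tiles are illustrative assumptions.

```python
import numpy as np

def tile_frame(frame, tile=32):
    """Split an AOSLO frame into a grid of tile x tile sub-images."""
    h, w = frame.shape
    rows, cols = h // tile, w // tile
    grid = frame[:rows * tile, :cols * tile].reshape(rows, tile, cols, tile)
    return grid.transpose(0, 2, 1, 3)  # shape (rows, cols, tile, tile)

def combine_row_predictions(probs):
    """Pool per-tile microsaccade probabilities along each fast-scan row.

    Tiles in one row are acquired at nearly the same time, so pooling a row
    yields one prediction per scan-row epoch (~1 ms apart in the abstract)."""
    return probs.max(axis=1)

frame = np.random.default_rng(0).random((512, 512))  # synthetic 512x512 frame
tiles = tile_frame(frame)
# Stand-in for a trained classifier: per-tile probability of containing a MS.
probs = np.full(tiles.shape[:2], 0.1)
probs[3, :] = 0.95  # pretend a microsaccade distorted scan-row 3
row_scores = combine_row_predictions(probs)
print(row_scores.shape, row_scores[3])  # (16,) 0.95
```

In the actual pipeline each tile would be scored by the trained model rather than this constant array, and the ERICA ground-truth velocity traces provide the labels.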


Information integration in early visual processing revealed by Vernier thresholds   Mengxin Wang, Department of Experimental Psychology, University of Oxford

Co-authors: Daniel Read, School of Mathematics, University of Leeds; David H. Brainard, Department of Psychology, University of Pennsylvania; Hannah E. Smithson, Department of Experimental Psychology, University of Oxford

Vernier acuity thresholds represent the minimal detectable spatial offset between two closely spaced targets. We previously showed that Vernier thresholds for a Poisson-limited ideal observer with access to the cone excitations are determined jointly by duration and contrast through the quantity duration x contrast squared. Here we measured thresholds in 7 human observers for combinations of stimulus contrast (100%, 50%, 25%, and 12.5%) and duration (16.7 ms, 66.7 ms, 266.7 ms and 1066.7 ms), while fixing other stimulus properties (foveal viewing; two achromatic vertical bars; length 10.98 arcmin; width 4.39 arcmin; vertical gap 0.878 arcmin). The combinations of duration and contrast were chosen to form four groups of constant duration x contrast squared. Thresholds were a decreasing function of duration x contrast squared. A one-way between-observers ANOVA does not reject the hypothesis that duration and contrast are integrated through the quantity duration x contrast squared, but the residuals obtained by predicting threshold within each of the four groups by its group mean varied systematically with duration, indicating that duration x contrast squared does not fully summarize the information integration. This difference between ideal and human performance indicates that post-receptoral factors not included in the ideal observer model, such as temporal filtering, affect human performance. These factors will be included in future modeling.
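The constant-product structure of the design can be checked numerically. The contrast and duration values below are from the abstract; note that the full 4 x 4 grid yields seven distinct product levels (equal products lie on the grid's diagonals), and the abstract does not specify which combinations formed its four groups, so the grouping here is purely illustrative.

```python
from collections import defaultdict

contrasts = [1.0, 0.5, 0.25, 0.125]      # 100%, 50%, 25%, 12.5%
durations = [16.7, 66.7, 266.7, 1066.7]  # ms

# duration x contrast^2 for every combination; rounding collects the
# nearly-equal products that share a diagonal of the grid.
groups = defaultdict(list)
for c in contrasts:
    for d in durations:
        q = d * c ** 2
        groups[round(q)].append((c, d))

for q in sorted(groups):
    print(q, groups[q])
```

The central diagonal (product ~16.7 ms) contains one combination at every contrast, which is what makes the design diagnostic: if duration x contrast squared fully summarized integration, thresholds along that diagonal would be identical.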

Funding acknowledgements:  This work received a UKRI’s Physics of Life grant funded by the Engineering and Physical Sciences Research Council and the Wellcome Trust [grant code: EP/W023873/1].


A paradoxical misperception of relative motion at the fovea   Josephine C. D'Angelo, University of California, Berkeley

Co-authors: Pavan Tiruveedhula, University of California, Berkeley; Raymond J. Weber, Montana State University; David W. Arathorn, Montana State University; Austin Roorda, University of California, Berkeley

Images moving in a direction consistent with retinal slip appear stable even if that motion is amplified. This persists even in the presence of a world-fixed background, giving rise to a misperception of relative motion. This phenomenon was previously explored 2° away from the line of sight. We asked: does it persist closer to the fovea? Would an image slipping with only a quarter of the retinal slip relative to a world-fixed image be perceived as stable? We implemented a novel method, presenting a fixation target and two circular images offset horizontally on either side by 0.43°, through an adaptive optics scanning light ophthalmoscope. The left image moved independently on a random walk and the right image moved contingent on retinal motion. Subjects adjusted the magnitude of motion of the random-walk image until it appeared to match the motion of the retina-contingent image, quantifying its perceived motion. We found a surprising discontinuity in the results: with background content present, images slipping consistent with the eye’s motion appeared stable, even when slipping with only a quarter of the retinal slip, while images moving inconsistently with retinal motion appeared to move. When all background content was removed, the perception of motion was entirely different. These results confirm that in the fovea, the visual system perceptually suppresses motion of images that move in directions consistent with retinal slip, and that background content is crucial for this computation.

Funding acknowledgements:  NIH R01EY023591; NIH T32EY007043; Berkeley Center for Innovation in Vision and Optics


The role of fixational drift in the Vernier task   Fabian Coupette, School of Mathematics, University of Leeds

Co-authors: David H. Brainard, Department of Psychology, University of Pennsylvania; Hannah E. Smithson, Department of Experimental Psychology, University of Oxford; Daniel J. Read, School of Mathematics, University of Leeds

We develop a simple one-dimensional continuum model of the Vernier discrimination task to study the impact of Gaussian blur, fixational drift, receptor noise, and retinal adaptation on an ideal observer's Vernier performance. Two rectangular stimuli with a prescribed width and relative offset are subjected to a Gaussian blur. Fixational drift shifts the resulting signal with time. The perceived signal is the weighted average over the history of local stimulation encoded by an adaptation kernel. We model this kernel as a difference of two exponentials, introducing two timescales describing initial integration and eventual recovery of a receptor. Finally, Gaussian white noise is added to capture random receptor fluctuations. Based on the Bayesian estimation of location and relative offset of both stimuli, we can study Vernier performance through numerical simulation as well as through analytical approximation for different eye movements. Analyzing diffusive motion in particular, we extract the diffusion constant that optimizes stimulus localization for long observation times. This optimal diffusion constant is inversely proportional to an average of the two timescales describing adaptation and proportional to the square of the larger of stimulus size or blurring width, giving rise to two separate regimes. We generalize our analysis to optimize discrimination and extend the class of eye motions considered beyond purely diffusive drift, e.g. with the inclusion of persistence.
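The adaptation kernel and the optimal-diffusion scaling described above can be written down directly. This is a minimal sketch under stated assumptions: unit amplitudes, an illustrative assignment of the two timescales (20 ms integration, 200 ms recovery), and a proportionality constant of 1 in the scaling; none of these specific values come from the abstract.

```python
import numpy as np

def adaptation_kernel(t, tau_int=20.0, tau_rec=200.0):
    """Difference-of-two-exponentials adaptation kernel K(t), t in ms.

    The fast timescale tau_int sets the initial rise (integration) and the
    slow timescale tau_rec the eventual decay (recovery); K(0) = 0, so a
    perfectly stabilized image is eventually not transmitted."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0, np.exp(-t / tau_rec) - np.exp(-t / tau_int), 0.0)

def optimal_diffusion(stim_size, blur_width, tau_int=20.0, tau_rec=200.0):
    """Abstract's scaling for the localization-optimal diffusion constant:
    proportional to max(stimulus size, blur width)^2 and inversely
    proportional to an average of the two timescales (constant set to 1)."""
    return max(stim_size, blur_width) ** 2 / (0.5 * (tau_int + tau_rec))

t = np.linspace(0, 1000, 10001)  # ms, 0.1 ms steps
k = adaptation_kernel(t)
print(float(adaptation_kernel(0.0)))  # 0.0: no net instantaneous response
```

With these timescales the kernel peaks near t = (tau_int * tau_rec / (tau_rec - tau_int)) * ln(tau_rec / tau_int), about 51 ms, which is the kind of intermediate timescale against which drift speed trades off.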


Fixational eye movements and retinal adaptation: optimizing drift to maximize information acquisition   Daniel J. Read, School of Mathematics, University of Leeds

Co-authors:  Alexander J. H. Houston, School of Mathematics & Statistics, University of Glasgow; Hannah E. Smithson, Department of Experimental Psychology, University of Oxford; David H. Brainard, Department of Psychology, University of Pennsylvania; Allie C. Hexley, Department of Experimental Psychology, University of Oxford; Mengxin Wang, Department of Experimental Psychology, University of Oxford

Fixational eye movements (FEMs) are small, fluctuating eye motions made when fixating on a target. Given that our visual system is the product of evolution, we may ask why FEMs are beneficial and whether they are optimal. A possible reason for FEMs is overcoming retinal adaptation (fading perception of a fixed image). We present a simple model system allowing theoretical investigation of FEM influence on information about an external stimulus. The model incorporates temporal stimulus modulation, retinal image motion due to the drift component of FEMs, blurring due to optics and receptor size, uniform sampling by the receptor array, adaptation via a bandpass temporal filter, and added noise. We investigate how elements of the model mediate the information transmitted, via: i) mutual information between visual system response and external stimulus, ii) direct estimation of stimulus from the system response, and iii) contrast threshold for signal detection. For all these we find a common quantity that must be maximized. For each spatial frequency this quantity is a summed power transmitted due to stimulus temporal modulation and phase shifts from FEMs, when passed through the temporal filter. We demonstrate that the information transmitted can be increased by adding local persistence to an underlying diffusive process. We also quantify the contribution of FEMs to signal detection for targets of different size and duration; such predictions provide a qualitative account of human psychophysical performance.
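The role of the bandpass temporal filter in this "common quantity" can be illustrated with a minimal frequency-domain sketch. Everything specific here is an assumption for illustration: the filter is modeled as a difference of two normalized first-order low-passes (so it vanishes at DC), the timescales are arbitrary, and steady drift at speed v is taken to convert a spatial frequency k into a temporal frequency f = k*v.

```python
import numpy as np

def H(f, tau_int=0.02, tau_rec=0.2):
    """Illustrative bandpass temporal filter: difference of two normalized
    first-order low-passes. H(0) = 0, so a static (fading) image
    transmits no power; intermediate temporal frequencies pass."""
    return 1 / (1 + 2j * np.pi * f * tau_rec) - 1 / (1 + 2j * np.pi * f * tau_int)

k = 5.0  # spatial frequency, cycles/deg
for v in (0.0, 0.5, 1.0):       # drift speed, deg/s
    f = k * v                   # temporal frequency induced by the drift
    print(v, round(abs(H(f)) ** 2, 3))
```

The printout makes the qualitative point of the model: with no retinal image motion the transmitted power at this spatial frequency is zero, and drift shifts signal energy into the filter's passband.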

Funding acknowledgements:  This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/W023873/1].


Recruiting native visual representations in visual cortex for electrode array based vision restoration   Ján Antolík, Computational Systems Neuroscience Group, Faculty of Mathematics and Physics, Charles University, Prague


Co-authors: Karolína Korvasová, Computational Systems Neuroscience Group, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic; Fabrizio Grani, Biomedical Neuroengineering Group, Miguel Hernandez University, Spain; Matěj Voldřich, Computational Systems Neuroscience Group, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic; Rocío López Peco, Biomedical Neuroengineering Group, Miguel Hernandez University, Spain; David Berling, Computational Systems Neuroscience Group, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic; Mikel Val Calvo, Biomedical Neuroengineering Group, Miguel Hernandez University, Spain; Alfonso Rodil Doblado, Biomedical Neuroengineering Group, Miguel Hernandez University, Spain; Tibor Rózsa, Computational Systems Neuroscience Group, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic; Cristina Soto Sánchez, Biomedical Neuroengineering Group, Miguel Hernandez University, Spain; Xing Chen, Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA; Eduardo Fernandez, Biomedical Neuroengineering Group, Miguel Hernandez University, Spain

The possibility of recruiting native functional representations, such as orientation preference, by external stimulation of the visual cortex could greatly advance the field of visual prosthetics. However, in blind humans the functional properties of neurons cannot be tested directly by measuring neural responses to visual input. A possible solution rests on the observation that functionally similar neurons tend to be more strongly correlated even in the resting state. Here we present a method to infer the orientation preference map from spontaneous activity recorded with a Utah array from the primary visual cortex of non-human primates.
We validated this method first in a detailed model of primary visual cortex and subsequently on Utah array recordings from macaque V1. Finally, we applied the method to recordings from blind human volunteers implanted with a cortical visual prosthesis and found that both the spatial and the functional properties of the set of stimulated electrodes affect perception. In particular, discrimination between two stimuli becomes easier the more spatially and functionally separated the two sets of stimulated sites are, demonstrating the functional relevance of the decoded visual representations for electrically evoked human perception.

Funding acknowledgements:  This work was supported through institutional funding from Charles University (project PRIMUS/20/MED/006) and through ERDF-Project Brain dynamics (CZ.02.01.01/00/22_008/0004643).


Contributed Talks II


Characterizing terrestrial illumination: Spectral, angular, spatial, and temporal variability   Cehao Yu, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region, China

Co-authors: Sylvia Pont, Perceptual Intelligence Laboratory, Faculty of Industrial Design Engineering, Delft University of Technology, The Netherlands; Anya Hurlbert, Centre for Transformative Neuroscience, Biosciences Institute, Newcastle University, United Kingdom

Terrestrial illumination undergoes continuous spectral, angular, spatial, and temporal changes throughout the day, influenced by diurnal cycles and atmospheric conditions such as haze. These variations in light exposure impact human physiology and behavior, particularly in providing "zeitgebers" (time givers) for biological rhythms. We analyzed spectral light-field data collected outdoors from dawn to dusk on four days: two in Delft (one sunny, one cloudy) and two overcast (uniformly cloud-covered) days in Newcastle. By decomposing the light field into diffuse and directional components, we identified differences in spectral composition between these components under all conditions, with overcast days showing reduced variability due to increased light scattering. Our study also explored the physiological implications for circadian regulation via melanopsin and other photoreceptors. We found that α-opic illumination vectors varied with weather, their order aligning with the sequence of photoreceptor spectral peak sensitivities (from S cones, to ipRGCs, rods, M cones, and L cones), especially under sunny and cloudy skies. Analysis of hazy versus clear images revealed that haze shifts chromaticity towards blue, potentially enhancing melanopic efficiency. Although these fluctuations are large, it is plausible that they do not impact biological rhythms to the same extent as illumination variations at dawn and dusk, ensuring that the latter remain the primary drivers of circadian rhythm regulation.


The caerulean line: Its relationship to the red-green category boundary of deuteranomalous observers   John Mollon, Department of Psychology, Cambridge University

Co-authors: Hugo Smith, Department of Psychology, Cambridge University; Marina Danilova, Department of Psychology, Cambridge University

At any one time and place, natural illuminants – skylight, sunlight, and their mixtures – lie on a straight line in a chromaticity diagram. We have termed this locus the ‘caerulean line’. In the MacLeod-Boynton diagram, it has a negative slope and is not aligned with either of the axes of the diagram. Three properties of normal colour perception exhibit a provocative (though approximate) alignment with the caerulean line: 1. The phenomenological category boundary between reddish and greenish hues. 2. The orientation of the discrimination ellipse at a neutral chromaticity (e.g. Boynton et al., 1983). 3. The locus of minimal thresholds when chromatic thresholds are measured along +45° lines in the chromaticity diagram. These coincidences may suggest that human colour vision has evolved so that redness and greenness indicate departures from the caerulean line in opposite directions. Anomalous trichromats offer a means to explore the causal relationships (if any) between the caerulean line and properties (1) to (3) above. In deuteranomalous observers with good discrimination, we have measured the locus of the red-green category boundary, and the discrimination ellipse centred on a metamer (for the deuteranomal) of Illuminant D65. For most anomalous observers, the discrimination ellipse at the neutral point is not aligned with the caerulean line. However, their phenomenological category boundary between reddish and greenish hues does fall close to the caerulean line.


Environmental calibration of perceived white   Daniel Garside, School of Psychology, University of Sussex, UK

Co-authors: John  Maule, School of Psychology, University of Sussex, UK; Alice Skelton, School of Psychology, University of Sussex, UK; Shoaib Nabil, School of Psychology, University of Sussex, UK, and Department of Psychology, University of Oslo, Norway; Sarjo Kuyateh, Department of Psychology, UIT, The Arctic University of Norway, Norway; Almina Selimovic, Department of Psychology, UIT, The Arctic University of Norway, Norway; Amanda Lindberg, Department of Psychology, UIT, The Arctic University of Norway, Norway; Mahdis Jafari, Department of Psychology, UIT, The Arctic University of Norway, Norway; Mikolaj Hernik, Department of Psychology, UIT, The Arctic University of Norway, Norway; Bruno Laeng, Department of Psychology, University of Oslo, Norway; Jenny Bosten, School of Psychology, University of Sussex, UK; Anna Franklin, School of Psychology, University of Sussex, UK

It has been proposed that colour perception is calibrated to the chromatic statistics of the environment. Here we investigate whether perceived white is calibrated to the chromatic statistics of the current local ‘visual diet’. We compare achromatic settings for participants in Norway living above (Tromsø, N = 165) or below (Oslo, N = 158) the Arctic Circle and across seasons. To capture the local visual diets we used images from colour-calibrated head-mounted cameras worn during daily life. For each image we computed the average chromaticity (L/(L+M) and S/(L+M)) and the amount of blue-yellow bias in the distribution of chromaticities. We find that perceived white is warmer (higher L/(L+M) and lower S/(L+M)) and more blue-yellow biased for observers living in Oslo compared to Tromsø. However, visual diets were warmer and more blue-yellow biased in Tromsø compared to Oslo. Perceived white did not vary significantly with season, yet visual diets were warmest in the winter. To explore the effects of the visual environment in early life, we also investigated how perceived white varies with latitude of birth and season of birth for participants living in Tromsø. Perceived white was lower in S/(L+M) (yellower) for adults born below the Arctic Circle than for adults born above it, and was higher in L/(L+M) (redder) for adults born in the summer. Combined, the findings suggest a possible link between colour perception and visual diet, and we discuss potential mechanisms.
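The chromaticity statistics described above reduce to simple ratios of cone excitations. The sketch below illustrates the computation with generic MacLeod-Boynton-style coordinates; the study's particular cone fundamentals, calibration, and scaling are not specified here, and the function names are ours.

```python
import numpy as np

def mb_chromaticity(lms):
    """L/(L+M) and S/(L+M) coordinates from cone excitations.

    lms: array-like of shape (n, 3) holding (L, M, S) excitations per pixel.
    A generic MacLeod-Boynton-style projection; scaling conventions vary.
    """
    lms = np.asarray(lms, dtype=float)
    L, M, S = lms[:, 0], lms[:, 1], lms[:, 2]
    lum = L + M                      # luminance as the sum of L and M signals
    return L / lum, S / lum

def mean_chromaticity(lms):
    """Average chromaticity of a sample of 'visual diet' pixels."""
    l, s = mb_chromaticity(lms)
    return float(l.mean()), float(s.mean())
```

With per-image pixel samples, `mean_chromaticity` gives the per-image average chromaticity used to compare visual diets across locations and seasons.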

Funding acknowledgements:  The work was funded by a European Research Council grant (ref 772193 – COLOURMIND) awarded to A.F.


Sensitivity to temporally modulated defocus for different monochromatic stimuli   Victor Rodriguez-Lopez, Institute of Optics, Spanish National Research Council (IO-CSIC), Madrid, Spain

Co-author: Carlos Dorronsoro, Institute of Optics, Spanish National Research Council (IO-CSIC), Madrid, Spain

Programmable lenses make it possible to present quick changes in optical power and to measure their visibility. In a previous study we reported the temporal defocus sensitivity function, which measures the just-noticeable defocus change at different temporal frequencies for an achromatic stimulus. Here we extend this measurement to different colors. A tunable lens (Optotune) was used to induce temporal defocus variations. An AMOLED display (Waveshare) was used to present red, green, and blue stimuli with narrow (spectral width at half maximum of 35 nm) and well-separated components (625, 530, and 460 nm). An achromatic (white on black) stimulus was also measured. All colors were equiluminant. A staircase procedure was used to find the just-noticeable defocus change at different temporal frequencies (from 0.5 to 35 Hz) using a 4 cpd Gabor patch in 3 young subjects. The just-noticeable defocus values across temporal frequencies were fitted to a model of temporal defocus perception to determine the maximum sensitivity and the defocus critical fusion frequency (DCFF) for each color component. Averaged across subjects, the maximum sensitivity was below 0.1 D for each color, with little difference among them. However, the DCFF was lower for blue (22 Hz) and red (26 Hz) stimuli than for green and white (35 and 36 Hz, respectively). Red and blue stimuli and white and green stimuli were highly correlated (r2=0.71 and 0.99, respectively). These results are relevant for emerging technologies that make use of temporal changes in optical power.
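A staircase of the kind used to find the just-noticeable defocus change can be illustrated generically. The abstract does not give the study's exact rules, so the sketch below is a plain 2-down-1-up staircase (converging near 70.7% correct) with an illustrative stopping criterion; all names and parameter values are ours.

```python
def run_staircase(respond, start, step, n_reversals=8):
    """Minimal 2-down-1-up staircase for a detection task.

    respond(level) -> True if the observer detects the defocus change
    at that level (in diopters). Returns the mean of the reversal
    levels as the threshold estimate. Illustrative only; real studies
    typically shrink the step size and randomize trial order.
    """
    level = start
    reversals = []
    correct_run = 0
    direction = None
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run += 1
            if correct_run == 2:              # two correct in a row: make it harder
                correct_run = 0
                if direction == 'up':
                    reversals.append(level)   # turning point: up -> down
                direction = 'down'
                level = max(level - step, step)
        else:                                 # one miss: make it easier
            correct_run = 0
            if direction == 'down':
                reversals.append(level)       # turning point: down -> up
            direction = 'up'
            level += step
    return sum(reversals) / len(reversals)
```

With a deterministic observer whose true threshold lies between two staircase levels, the estimate settles between those levels, bracketing the threshold.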

Funding acknowledgements:  La Caixa Foundation LCF/TR/CI22/52660002 to VRL and CD.


Differential effects of chromatic and luminance flicker on detection of large versus small visual stimuli   J.T. Pirog, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, CA, USA

Co-author: William S. Tuten, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, CA, USA

Previous studies have shown that the detectability of large spots (≥ 1°) can be reduced following exposure to temporal contrast modulation. We examined detection thresholds for small, diffraction-limited flashes following flicker presented in various spatiochromatic configurations with an adaptive optics scanning light ophthalmoscope. Circular, 543 nm increments of varying diameter (1, 3, or 23 arcmin) were presented to the fovea on a 1.4° background following presentation of isoluminant (i.e., red/green) or isochromatic (i.e., orange/black) checkerboards flickering at 3.75 Hz. Stimulus onset asynchrony (SOA; relative to flicker offset) varied between 33 and 500 ms, and detection thresholds were obtained at each SOA using an adaptive staircase procedure. Compared to control measurements, sensitivity to the 23 arcmin stimulus was reduced following both chromatic and luminance flicker. The magnitude of flicker-induced sensitivity loss decreased as SOA increased, reaching a plateau at ~250 ms at a level ~0.3 log units below control sensitivity. These results suggest that larger spots engage mechanisms susceptible to both short-term and sustained desensitization processes. In comparison, the detectability of the 1 and 3 arcmin stimuli was largely unaffected by exposure to flicker of either type, implying that small spots engage a more complex confluence of pathways. Further investigation is necessary to untangle the pathway(s) mediating detection of small spots.

Funding acknowledgements:  National Institutes of Health: R01EY023591, T32EY007043; Air Force Office of Scientific Research: FA9550-21-1-0230, FA9550-20-1-0195; Hellman Fellows Program; Alcon Research Institute


Colour-selective regions of visual cortex are responsive to the colour statistics of objects   Ian Pennock, University of Sussex

Co-authors: John Maule, University of Sussex; Chris Racey, University of Sussex; Teresa Tang, University of Sussex; Yasmin Richter, University of Sussex; Chris Bird, University of Sussex; Jenny M. Bosten, University of Sussex; Anna Franklin, University of Sussex

It has been suggested that objects are more likely to be warmer in colour, redder and more saturated than the background. Here, we investigate the colour statistics of objects, and the brain regions that are responsive to these statistics. First, we analysed the Natural Scenes Dataset (NSD), a 7T dataset in which 8 participants viewed up to 10,000 natural scenes. Our analysis of the chromaticities of the 80 segmented object classes and backgrounds confirmed that object pixels were warmer, redder, more saturated and darker than background pixels. The probability that pixels were from objects rather than backgrounds (the 'Object Colour Probability', OCP) was calculated for 240 hue bins. The mean OCP of images correlated with NSD BOLD responses mostly in the ventral visual pathway. Other image statistics (e.g., number of food pixels) better explained the responses of correlated voxels. A second fMRI study, in which colours were shown as a single patch on a grey background, was analysed to study whether the ventral visual pathway is responsive to OCP in the absence of other scene statistics. To constrain our analyses to functionally relevant areas, we used independent functional localizers to identify colour- and object-selective areas and combined these with NSD-defined OCP-responsive areas. The OCP of the colour patches significantly correlated with BOLD in colour-selective but not object-selective visual regions. Implications for the role of colour in object vision are discussed.
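The Object Colour Probability statistic, the probability that a pixel of a given hue belongs to an object rather than the background, can be sketched as a per-bin proportion. The sketch uses 240 hue bins as in the abstract; the original analysis's exact binning, colour space, and any smoothing are assumptions, and the function name is ours.

```python
import numpy as np

def object_colour_probability(hues, is_object, n_bins=240):
    """Estimate P(object | hue bin) from labelled pixels.

    hues: pixel hue angles in degrees [0, 360).
    is_object: boolean mask, True where the pixel belongs to a
    segmented object rather than the background.
    Bins with no pixels are returned as NaN.
    """
    hues = np.asarray(hues, dtype=float) % 360.0
    is_object = np.asarray(is_object, dtype=bool)
    bin_width = 360.0 / n_bins
    bins = np.floor(hues / bin_width).astype(int)
    ocp = np.full(n_bins, np.nan)
    for b in range(n_bins):
        in_bin = bins == b
        if in_bin.any():
            # proportion of this bin's pixels that lie on objects
            ocp[b] = is_object[in_bin].mean()
    return ocp
```

Averaging the per-pixel OCP over an image then yields the image-level statistic that was correlated with BOLD responses.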

Funding acknowledgements:  The work was funded by ERC grant 772193 COLOURMIND to AF and ERC grant 949242 COLOURCODE to JB.


Contributed Talks III


The Berkeley Widefield Model Eye   Austin Roorda, University of California, Berkeley

Co-authors: Pavan Tiruveedhula, University of California, Berkeley; Gareth Dudley Hastings, University of California, Berkeley

Model eyes are a valuable tool for vision science, but in many cases they are designed for one application (e.g. chromatic aberration) and are not useful for another (e.g. predicting off-axis visual performance). Moreover, a single model eye cannot capture the variability in the population. The Berkeley widefield model eye (Hastings et al., Journal of Vision, https://doi.org/10.1167/jov.24.7.9) is designed to address some of these limitations (i) by being anatomically plausible (it uses 27 unique biometric parameters to define each eye) and (ii) by being not a single eye but a collection of 28 emmetropic (−0.50 to +0.50 D) and 20 myopic (−1.5 to −4.5 D) eyes. Using a set of eye models might initially seem daunting for researchers, so this presentation will describe the models and demonstrate ways in which cohorts of model eyes can be used effectively for research purposes. The demonstration will focus on a concentric multizone lens design that aims to slow the progression of myopia. It will include an analysis of on- and off-axis performance, changes in optical quality with pupil size, and comparisons of optical performance across a range of myopic eyes. A primary outcome of the analysis is that concentric multizone lenses generate complex point spread functions under many conditions and that the notion that these lens designs affect defocus in the periphery is too simplistic.

Funding acknowledgements:  Berkeley Center for Innovation in Vision and Optics


Infants' eye movements to scene statistics in natural behavior   T Rowan Candy, Indiana University

Co-authors: Zachary Petroff, Indiana University; Stephanie Biehn, Indiana University; Sarah Freeman, Indiana University; Kathryn Bonnen, Indiana University; Linda Smith, Indiana University

Infants start to interact with their visual environment during the first postnatal months. Immaturities in gross motor responses and spatial vision constrain their visual behavior during this rapid development. Analyses of first-person video and eye-tracking data from infants were performed to understand key components of visual experience during this period of visual learning. Methods: Infants wore head-mounted scene and binocular eye-tracking cameras (modified Pupil Labs Core) while engaging in naturalistic behavior in an 8 ft x 8 ft home-like environment. Calibrated eye movements were identified using standard approaches (e.g. Engbert & Mergenthaler, 2006) and image statistics were extracted at fixation locations (>200 ms). Results: Recordings (10.5 hours) at ages 2-3 (n=24), 5-6 (n=35), 8-9 (n=27), and 11-12 (n=11) months were analyzed. Eye position and saccade amplitude distributions relative to the head were tighter for younger infants. RMS contrast around fixation was also highest at the youngest ages. Conclusions: The youngest infants, with limited head and trunk control, exhibited the most restricted range of eye movements, suggesting that gaze shifts do not compensate for limited mobility. This likely leads to less active sampling of the scene, slower rates of change in input, and a tight link between head- and eye-centered frames of reference. Early experience also provides a concentration of contrast serving the development of foveal and parafoveal function.
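The RMS contrast statistic extracted around fixation is conventionally the standard deviation of patch luminance divided by its mean. A minimal sketch follows; the patch size and luminance calibration used in the study are not specified here.

```python
import numpy as np

def rms_contrast(patch):
    """RMS contrast of an image patch: std of luminance / mean luminance.

    patch: 2-D array of non-negative luminance values around a fixation.
    Returns 0.0 for a zero-mean (dark) patch to avoid division by zero.
    """
    patch = np.asarray(patch, dtype=float)
    mean = patch.mean()
    return float(patch.std() / mean) if mean > 0 else 0.0
```

A uniform patch gives zero contrast; a patch alternating between luminances 1 and 3 gives 0.5.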

Funding acknowledgements:  NIH-NEI R01EY032897


Surface color information gained in searching natural scenes   David Howard Foster, University of Manchester, UK

Co-author: Kinjiro Amano, University of Manchester, UK

Natural scenes are often spatially and spectrally complex, containing fields, woodland, grasses, ferns, and flowers, as well as buildings and treated surfaces. How we search such scenes may not be readily explained by notions of object salience. Instead, gaze behavior may satisfy a simpler requirement: to gain information about a scene’s distinct reflecting elements, characterized by their reflected color. This idea was tested by estimating the information from seven observers’ eye fixations recorded while searching natural outdoor scenes for a small neutral target. Scenes were rendered on a computer-controlled display and appeared repeatedly in successive trials of 1 s each. Information was estimated numerically between reflected spectral radiances at the pooled fixation positions and the corresponding excitations in long-, medium-, and short-wavelength-sensitive cone photoreceptors. These estimates were then compared with those from the same number of sample points distributed randomly across each scene. In 16 of the 20 scenes tested, fixations delivered more information than random sampling, a proportion that depended little on photoreceptor noise. The information gain in each scene covaried with color entropy, a frequency-weighted measure of spectral diversity. Although information gain may not account fully for search behavior, it could offer a basis, where appropriate, for more object-oriented explanations.

Funding acknowledgements:  Leverhulme Trust (RPG-2022-266), EPSRC (EP/W033968/1)


Population receptive field sizes in primary visual cortex depend on luminance and color direction   Antony B Morland, University of York

Co-authors: Rebecca Lowndes, University of York; Rebecca Willerton, University of York; Prabhat Changlani, University of York; Lauren Welbourne, University of York; Barbara Molz, University of York; Heidi Baseler, Hull-York Medical School

Population receptive field (pRF) modeling of BOLD responses to bars that traverse the visual field is a technique that captures retinotopic representations of the visual field in the human brain. The technique also yields a measure of pRF size which, when plotted against the pRF model parameter of eccentricity, follows a lawful relationship within each visual field representation and shows consistent differences between representations. Previously, we asked whether the relationship between pRF size and eccentricity changed when the bars were defined by canonical color directions. We found that luminance, L-M, and S-cone stimuli resulted in size-versus-eccentricity plots that were very similar to each other. Here we revisit whether pRF size varies with the luminance and color of the visual stimuli. We first asked whether luminance levels selected to drive either rods or cones influenced pRF size. In a V1 region of interest representing 3.5-6.5°, we found pRF size was larger for rod than for cone conditions. We next extracted pRF size for responses to luminance, L-M, and S-cone contrast stimuli, which were presented on a background whose luminance contrast was randomly modulated (to minimize luminance artefacts). In a V1 ROI representing 2-4°, pRF size was larger under S-cone than under luminance and L-M conditions. Our results show that pRF sizes in V1 can vary with stimulus characteristics in a largely predictable way.

Funding acknowledgements:  BBSRC, European Commission


Pulse trains to percepts: A virtual patient describing the perceptual effects of human visual cortical stimulation   Ione Fine, University of Washington / University of Leeds

Co-author: Geoffrey Michael Boynton

Here we describe how computational models or 'virtual patients', based on the neurophysiological architecture of V1, can be used to predict the perceptual experience of cortical implant patients. Our virtual patient model can successfully describe psychophysical data from a wide range of previously published studies describing the location, size, brightness and spatiotemporal shape of electrically induced percepts in humans. Our simulations suggest that, in the foreseeable future, the perceptual quality of cortical prosthetic devices is likely to be limited by the neurophysiological organization of the visual cortex, rather than the size and spacing of electrodes.

Funding acknowledgements:  Supported by National Institutes of Health (OER & NEI) R01EY014645 (IF). National Institutes of Health (NEI) R01EY12925 (GMB).


At a glance - Facilitated identification of abnormal representations in the visual cortex with fMRI micro-probing   Michael Hoffmann, Magdeburg University, Germany

Co-authors: Khaldoon Al-Nosairy, Magdeburg University, Germany; Elisabeth Quanz, Magdeburg University, Germany; Joana Carvalho, Champalimaud Centre, Lisbon, Portugal; Frans Cornelissen, University of Groningen, Netherlands

The integrity of visual function depends on the precise match of retinal and post-retinal processing. Consequently, current therapeutic initiatives to restore the retinal input to the human visual system also require advanced functional imaging of the visual cortex. For this purpose, fMRI-based "micro-probing" [1] is particularly promising for visualizing population receptive field (pRF) abnormalities without a priori assumptions about their structure. We demonstrate this here for cortical representation abnormalities in a participant born without an optic chiasm (achiasma), for whom fMRI-based pRF mapping of the visual cortex at 3 Tesla was performed and micro-probing-based pRF structures were visualized as back-projections into the visual field. Conventional analysis methods result in cortical maps ipsilateral to the stimulated eye that are difficult to interpret and that allow predictions of the cortical organization only with specific a priori assumptions. Micro-probing, on the other hand, reveals "at a glance" abnormal, systematically fragmented pRFs that are mirror-symmetric along the vertical meridian (symmetry coefficient: achiasma 0.30 vs controls 0.03; p<0.001). The micro-probing results directly confirm the predictions of previous achiasma studies and thus underscore the potential of fMRI-based micro-probing to identify pathology-related, as yet unknown representation abnormalities of the visual cortex. [1] Carvalho et al. (2020) Neuroimage 209:116423

Funding acknowledgements:  Supported by the German Research Foundation


Poster session

All posters remain up for the duration of the conference. Odd-numbered posters present on Friday (14:30-18:00); even-numbered posters present on Saturday (11:30-13:30).

(01) Using OPM-MEG to study the timecourse of human contrast discrimination   Abbie Lawton, University of York

Co-authors: Richard Aveyard, University of York; Alex Wade, University of York; Ben Clayden, University of York; Stephen Robinson, University of York

The International Brain Laboratory (IBL) is a large-scale project collecting multiunit measurements from the mouse brain during a simple 2AFC perceptual decision task. The goal is to characterise the flow of information across the brain from sensory input areas through to motor outputs, and the way that this information flow can be modulated by priors. Our lab is translating the IBL task to humans using a combination of psychophysics and neuroimaging. Here we describe the results from a pilot study using a novel type of neuroimaging (OPM-MEG). We first describe the adaptations necessary to alter the original rodent task to make it appropriate for human subjects. We then present behavioural and neuroimaging data obtained using this modified paradigm. Human psychophysical responses recapitulate key features of the rodent behavioural data, including the effect of perceptual priors or ‘bias’. Psychophysical response functions have the same form and bias dependency as those obtained from mice. Using the MEG data we are able to decode key features of the IBL paradigm including visual stimuli, responses, bias blocks and feedback in a time-resolved manner. We show that OPM-MEG responses are consistent with fMRI responses obtained in our lab using the same paradigm. We conclude that multimodal neuroimaging techniques (OPM-MEG and fMRI) can be applied to the IBL task, allowing us to relate neuronal-level recordings in rodents with whole-brain population responses in humans.

Funding acknowledgements:  Rank Prize 


(02) Measuring the limits of long-term adaptation to hue-rotated altered reality   Yesesvi Somayaji Konakanchi, University of Sussex

Co-authors: Jenny Bosten, University of Sussex; Anna Franklin, University of Sussex; John Maule, University of Sussex

The visual system adapts to chromatic changes by altering its sensitivity to the environment. Previous research on chromatic adaptation under natural viewing has exposed observers to simple transformations using filters or lenses (Neitz et al., 2002; Engel et al., 2016). It is an open question whether our visual system adapts to complex chromatic transformations as it does to complex visuospatial manipulations (Richter et al., 2002). Altered reality (AR) devices allow us to address this question. Grush et al. (2015) provided qualitative evidence from two observers that their perceptual experience of colour was altered after AR exposure to a hue-rotated world. We exposed observers to a real-time pass-through feed in an AR headset (Meta Quest 3), applying a hue rotation of 120 degrees in HSL space (e.g., blue sky turns magenta). Observers (N = 8) were immersed in either a positive or a negative hue rotation, during which they interacted with the unnaturally coloured world through everyday activities including walking in nature, painting, and selecting and eating food. We measured the observers’ perception of unique yellow and unique blue prior to adaptation and each hour during their exposure to the hue-rotated world, for up to four hours. Our results expose the limits of visual adaptability, elucidating whether adaptation extends to unnatural chromatic transformations, and attempt to shed light on the temporal characteristics of long-term adaptation.
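A hue rotation of the kind applied in the headset can be illustrated with Python's standard colorsys module. Note that colorsys works in HLS (hue, lightness, saturation) ordering rather than HSL; the device's actual colour pipeline is not specified here, so this is only an approximation of the manipulation.

```python
import colorsys

def hue_rotate_rgb(r, g, b, degrees=120):
    """Rotate the hue of an RGB colour (components in [0, 1]).

    Converts to HLS, shifts the hue angle, and converts back.
    An illustration of the 120° hue rotation described above, not
    the headset's actual rendering pipeline.
    """
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    h = (h + degrees / 360.0) % 1.0          # hue is stored in [0, 1)
    return colorsys.hls_to_rgb(h, l, s)
```

Under this transformation a 120° rotation maps pure red to pure green, analogous to the blue-sky-to-magenta example in the abstract.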

Funding acknowledgements:  Funding from the European Research Council (ERC) under the Horizon 2020 research and innovation programme (Project COLOURMIND: Grant agreement No. 772193, to A.F.) and a University of Sussex PhD studentship to Y.K.


(03) Development of a natural wideview 3D scene fMRI dataset for modeling human spatial cognition   Joseph Obriot, Center for Information and Neural Networks (CiNet), Advanced ICT Research Institute, National Institute of Information and Communications Technology

Co-authors: Pei-Yin Chen, Center for Information and Neural Networks (CiNet), Advanced ICT Research Institute, National Institute of Information and Communications Technology; Atsushi Wada, Center for Information and Neural Networks (CiNet), Advanced ICT Research Institute, National Institute of Information and Communications Technology

Recent research shows that deep neural networks (DNNs) trained for object recognition can predict neural responses to natural stimuli with unprecedented accuracy, serving as computational models of hierarchical visual processing along the ventral visual stream. Several functional brain datasets have been published to facilitate this DNN modeling approach, which compile neural responses to large-scale natural image datasets. However, their application in spatial cognitive processing, especially within the dorsal visual stream, remains underexplored. Here, we propose a novel dataset that combines fMRI and wideview stereoscopic presentation of natural 3D scenes, which reflects conditions known to facilitate spatial cognitive functions. The stimuli consisted of movie clips of indoor 3D scenes with 3D observer motion, generated using Habitat-Sim, a real-world simulator for training embodied AIs. To preserve geometrical accuracy in spatial 3D structure, the viewing angle and participant-wise interpupillary distance were set identically between rendering and presentation. Training and test data were acquired in separate scanning runs, each presenting the scene movie clips continuously. Preliminary results show voxels with high explainable variance across both ventral and dorsal visual cortical areas extending to the far periphery, indicating the potential of the dataset for quantitative and high-dimensional modeling of visuo-spatial processing involved in human spatial cognition.

Funding acknowledgements:  This research is supported by JSPS Kakenhi grants 21H04896


(04) Visual discomfort in the everyday environment   John Maule, Statistical Perception Lab, University of Sussex

Co-authors: Anzonia Farrant, Statistical Perception Lab, University of Sussex; Clare Davis, Statistical Perception Lab, University of Sussex

Visual discomfort describes an aversive subjective experience characterised by perceptual distortions, blurred vision, diplopia, pain in the eyes, headache and/or nausea. Previous laboratory studies have found that levels of visual discomfort can be predicted from some statistical properties of scenes, including the spectral slope (e.g. Penacchio & Wilkins, 2015) and colour contrast (Juricevic et al. 2010; Penacchio et al. 2021). Lighting flicker (e.g. Yoshimoto et al., 2017) and colour temperature (e.g. Kakitsuba 2015) have also been shown to trigger discomfort. We investigated everyday occurrences of visual discomfort using a visual survey method. Participants (N = 36) captured scenes which they found to be visually uncomfortable within a university library. Participants gave a narrative and discomfort rating for each image. We also surveyed the lighting in the photographed areas, measuring flicker and the spectral power distribution of the illumination. Analysis showed some support for the importance of the image statistical features identified in laboratory studies, although there was no interaction between these features and lighting flicker or colour temperature. Qualitative analysis of participant narratives revealed that experiences of discomfort were attributed to low-level features (e.g. pattern, contrast), but also structural features (e.g. depth, disorganisation). These results provide new insights into the causes of visual discomfort in the everyday environment.

Funding acknowledgements:  Part funded by the Rank Prize


(05) Characterising Visual Evoked Potential contrast response functions using achromatic and chromatic Gabor patches   Joel Martin, The University of Edinburgh

Co-authors: Zoe Darrasse, University Paul Sabatier Toulouse III; Jasna Martinovic, The University of Edinburgh

The human visual system processes chromatic and achromatic information by adding or contrasting signals from short (S), medium (M) and long-wavelength (L) cones, through cone-additive (L+M) or cone-opponent (L-M, S-(L+M)) mechanisms. Multiple studies have shown that Visual Evoked Potentials (VEPs) to chromatic and achromatic sinusoidal gratings differ with respect to morphology. Here we characterise VEP contrast response functions to chromatic and achromatic 0.8 cycles per degree Gabors in a relatively large sample of participants (n=27). Participants view chromatic (set at individual isoluminance) and achromatic Gabors at four different contrast levels while electroencephalograms are recorded. Detection thresholds and salience matching data are also collected to establish the range of individual differences. We replicate the findings of previous normative studies: achromatic-driven pattern-onset VEPs are characterised by a robust P1 component that saturates at higher levels of contrast, while chromatic Gabors elicit a single negative deflection whose amplitude and latency depend on stimulus contrast more linearly. The data we present demonstrate the suitability of Gabors for establishing VEP-derived contrast gain functions and represent the beginnings of a comprehensive normative data set that will eventually be used for control comparisons in a large-scale study on visual function in individuals with bipolar disorder.
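The abstract does not name a fitting function, but VEP contrast response functions of this saturating-versus-linear kind are often summarised with a hyperbolic-ratio (Naka-Rushton) equation. The sketch below, with invented parameter values, is purely illustrative and is not the authors' analysis:

```python
import numpy as np

def naka_rushton(c, r_max, c50, n):
    """Hyperbolic-ratio contrast response function.

    r_max : asymptotic response amplitude (e.g. component amplitude in microvolts)
    c50   : semi-saturation contrast (response equals r_max / 2 at c = c50)
    n     : exponent controlling how steeply the function rises
    """
    c = np.asarray(c, dtype=float)
    return r_max * c**n / (c**n + c50**n)

# Hypothetical parameters, not fitted to the study's data:
contrasts = np.array([0.1, 0.2, 0.4, 0.8])          # four contrast levels
achromatic = naka_rushton(contrasts, r_max=8.0, c50=0.15, n=2.0)  # saturates early
chromatic  = naka_rushton(contrasts, r_max=8.0, c50=0.60, n=1.0)  # near-linear range
```

With a high semi-saturation contrast and n near 1, the same equation behaves almost linearly over the tested range, mirroring the more linear chromatic responses described above.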

Funding acknowledgements:  Wellcome [226787]


(06) Assessing the relationship between central visual field loss, physical activity, and cognitive function   Holly D H Brown, Centre for Cognition and Neuroscience, School of Human and Health Sciences, University of Huddersfield, UK

Co-authors: Eleanor J Hoyle, Centre for Cognition and Neuroscience, School of Human and Health Sciences, University of Huddersfield, UK; Leah G Kelly, Department of Psychology, University of York, UK; Catherine P Agathos, The Smith-Kettlewell Eye Research Institute, San Francisco, California, United States; Natela M Shanidze, The Smith-Kettlewell Eye Research Institute, San Francisco, California, United States; Heidi A Baseler, Hull York Medical School, University of York, UK

Loss of central vision affects a variety of activities of daily living, limiting high-acuity tasks like reading and increasing isolation due to loss of mobility and decreases in physical and social activity. These outcomes are known to affect healthy aging and can be associated with accelerated cognitive decline. Here, we explore how cognitive function and physical activity in people with central vision loss compare with those of sighted controls. Participants with macular-affecting pathologies (MAP) and age-matched sighted controls were recruited in both the USA and UK. Cognitive function was assessed using the MoCA-Blind, an adapted version of the Montreal Cognitive Assessment validated for the visually impaired. Physical and lifestyle activity levels were evaluated using several measures, including the Timed Up-and-Go functional balance instrument and/or the augmented Victoria Longitudinal Study (aVLS) activities questionnaire. Information about the nature and extent of visual impairment was also collected. Preliminary findings reveal a complex relationship between these variables: visual status (MAP vs sighted control) and physical and lifestyle activity levels, as assessed by the aVLS questionnaire, predicted MoCA-Blind scores, with the MAP group scoring significantly lower on aVLS activity measures. Interestingly, MoCA-Blind scores were not predicted by the Timed Up-and-Go test after controlling for age.


(07) Chromatic blur detection differences as a function of refractive error   Maria Vinas-Pena, Institute of Optics, Spanish National Research Council (IO-CSIC), Madrid, Spain

Co-authors: Paulina Dotor-Goytia, Institute of Optics, Spanish National Research Council (IO-CSIC), Madrid, Spain; Elena Moreno, Institute of Optics, Spanish National Research Council (IO-CSIC), Madrid, Spain; Victor Rodriguez-Lopez, Institute of Optics, Spanish National Research Council (IO-CSIC), Madrid, Spain

Visual information modulates eye growth. The retina integrates optical signals of opposite defocus sign, recognizes their sign, and generates sign-specific molecular signals that stimulate or inhibit eye growth via scleral remodeling. However, the optical cues that modulate eye growth (i.e., the visual input from the outside world) are rarely well-defined homogeneous patches of light or dark, but typically a mixture of large- and small-scale structures that interact with the dynamics of visual function (e.g., accommodation and neural adaptation). Recently, contrast perception has emerged as a possible cue for emmetropization: optical defocus proportionally degrades contrast at image edges, so the retina could use edge contrast to determine the focal plane and color contrast to identify the sign of defocus. Moreover, the detection mechanisms of the visual system are modified after prolonged exposure to a degraded stimulus, with the role of native aberrations being increased in early-onset myopes, suggesting potential differences in neural sensitivity to blur, and therefore in the detection of defocus of different signs and wavelengths. The aim of this study is to investigate the perception of combined optical cues using an Adaptive Optics visual simulator to shed light on the underlying stimulus detection mechanisms guiding eye growth.

Funding acknowledgements:  This research has received funding from the Optica Foundation under the Optica Women Scholars (2022) program to EM; La Caixa Foundation LCF/TR/CI22/52660002 to VRL and PD; Spanish Research Agency grant CPP2021-008388D to VRL; Spanish National Research Agency (Spanish Government) under the Ramón y Cajal (RYC2021-034218-I), the Consolidacion2022 (CNS2022-135326), and the PID2022 (PID2022-139840OA-I00) programs to MV.


(08) In search of Attention Restoration: does the statistical stability of natural images support enhanced visual cognition?   Shoaib Nabil, Statistical Perception Lab, University of Sussex

Co-author: John Maule, Statistical Perception Lab, University of Sussex

Stable visual ensemble statistics can support performance on a visual search task (e.g. Corbett and Melcher, 2014). Such results may indicate improvements in the efficiency of visual cognition in response to a more predictable environment. Observations of improved cognition when immersed in nature (e.g. Berman et al., 2008) have been related to concepts of perceptual fluency and Attention Restoration. We investigated whether the statistical stability of natural scenes could be the underlying mechanism supporting enhancements in visual cognition related to natural images. In experiment 1, we replicated the first study from Corbett and Melcher (2014), showing that sequences of trials with a stable mean size of Gabor elements result in enhanced visual search for an orientation singleton target, compared to sequences with an unstable mean size. In experiment 2, we embedded visual search targets within a set of natural scene images. We leveraged existing variation in image statistics between images to present sequences where the slope of the Fourier amplitude spectrum was relatively stable (gradually increasing/decreasing) or unstable (randomly ordered). The results have implications for our understanding of the effect of the visual environment on visuo-cognitive functions and the extraction of image statistics by the visual system. We discuss the likely role of eye movements in sampling natural scenes.

Funding acknowledgements:  Funded by the School of Psychology, University of Sussex


(09) Leveraging AI to classify sex based on fovea shape features   Knectt Lendoye, Newcastle University

Co-authors: Raheleh Kafieh, Durham University; David Steel, Newcastle University; Christian Taylor, Newcastle University; Dexter Canoy, Newcastle University; Jaume Bacardit, Newcastle University; Anya Hurlbert, Newcastle University

We present a new AI-based methodology for classifying sex from foveal shape features extracted from OCT scans, with the aim of understanding foveal variability between males and females. Deep neural networks have been used to classify sex from retinal images such as colour fundus photographs and OCT B-scans, and heatmaps can identify the regions of the retinal layers responsible for a network's decision. However, it remains challenging to identify the exact set of features that influence these networks. Our methodology leverages AI to extract more than 50 foveal features and train machine learning classifiers, following these steps: 1) segmentation of 4000 good-quality OCT scans from selected healthy controls (no eye disorders) in the UK Biobank; 2) feature extraction from the segmented layer boundaries, ranging from four commonly used features (foveal pit diameter, depth, and nasal and temporal slopes) to an extended set comprising these features for all boundaries plus the thickness of each layer; 3) training machine learning classifiers and ranking the distinct retinal features by importance. Performance of the top classifiers improved from 0.55 ROC, slightly better than chance, with the 4 initial features, to 0.65 ROC with 49 features on a single B-scan segmentation. The results highlight the promise of our method in applying AI to discover meaningful retinal biomarkers and to analyse foveal shape morphology by identifying appropriate foveal features.

Funding acknowledgements:  Neuroscience Fund, Centre for Transformative Neuroscience (NUCoRE), Newcastle University


(10) Increasing the luminance of primary colors increases the perception of warmth   Colin Gardner, University of Georgia

Co-authors: Lisa Renzi-Hammond, University of Georgia; Cassandra Mesh, University of Georgia; Jessica Lin, University of Georgia; Randy Hammond, University of Georgia

There is a long history of linking the perceptions of temperature and color: red (R) and yellow (Y) are considered warm, whereas blue (B) and green (G) are relatively cool. Past studies, however, have not varied the intensity of those colors to determine how the perception of temperature is influenced. To test this, we used four colored lights, each presented at five intensity levels, with 20 young healthy subjects with normal color vision. An optical system with a Xenon-arc light source, interference filters (peak λ = 470, 516, 572, 652 nm), and a circular neutral density wedge to vary intensity was used. Temperature perception was assessed using an ordinal scale from −5 (coolest) to +5 (warmest). The order of the colors and the intensity levels was varied randomly. Averaging across power levels, B (−1.87) and Y (+1.09) were rated the coolest, whereas G (+2.1) and R (+3.75) were rated the warmest. All colors, however, warmed with increasing intensity. A linear regression fit to the data averaged across luminance explained the majority of the variance: B (r² = 0.78), Y (r² = 0.93), G (r² = 0.98) and R (r² = 0.92). For example, the average rating for B went from −2.3 at the lowest intensity to −1.6 at the highest intensity. Like others, our data show that color is significantly (p<0.001) linked with temperature perception. Increasing the luminance of colors, however, consistently skews perception toward increased warmth.

Funding acknowledgements:  N/A


(11) Assessing the relationship between the cone mosaic and AO-corrected visual acuity   Mina Gaffney, Biomedical Engineering, Marquette University and the Medical College of Wisconsin, Milwaukee, WI, United States

Co-authors: Joseph Kreis, Cell Biology, Neurobiology & Anatomy, Medical College of Wisconsin, Milwaukee, WI, United States; Heather Heitkotter, Cell Biology, Neurobiology & Anatomy, Medical College of Wisconsin, Milwaukee, WI, United States;  Emma Warr, Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, United States; Ashleigh Walesa, Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, United States; Katherine Hemsworth, Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, United States; Emily Kind, School of Medicine, Medical College of Wisconsin, Milwaukee, WI, United States; Pavan Tiruveedhula, Herbert Wertheim School of Optometry & Vision Science, University of California Berkeley, Berkeley, CA, United States; Austin Roorda, Herbert Wertheim School of Optometry & Vision Science, University of California Berkeley, Berkeley, CA, United States; William S. Tuten, Herbert Wertheim School of Optometry & Vision Science, University of California Berkeley, Berkeley, CA, United States; Joseph Carroll, Biomedical Engineering, Marquette University and the Medical College of Wisconsin, Milwaukee, WI, United States; Cell Biology, Neurobiology & Anatomy, Medical College of Wisconsin, Milwaukee, WI, United States; Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, WI, United States

Using adaptive optics (AO), it is possible to deliver near diffraction-limited stimuli to the human retina to assess the relationship between the cone mosaic and visual function. Here we sought to establish device-specific control data for future studies of individuals with retinal disease. We used an AO scanning light ophthalmoscope (AOSLO) to quantify the cone mosaic and measure visual acuity in the dominant eye of 18 individuals (7M, 11F; 15-67 years) without retinal pathology. Average density at the cone density centroid was 186,925 cones/mm^2. Visual acuity was assessed using an AO-corrected Snellen E presented via a QUEST-driven four-alternative forced-choice task. The mean (± SD) observed acuity across individuals was -0.23 (±0.08) logMAR. We compared observed acuity to that predicted by foveal cone spacing, using the average spacing within a given individual's 95% bivariate contour ellipse area centered on an estimated preferred retinal locus of fixation. The mean (± SD) predicted acuity across individuals was -0.30 (±0.03) logMAR. The ratio of observed to predicted acuity ranged from 0.30 to 1.54 (average = 0.78). Six individuals had observed acuity equal to or better than that predicted by their foveal cone spacing, while the other 12 had observed acuity worse than predicted. These results warrant further examination of factors contributing to the variation in AO-based acuity measures, including experimental differences, internal response bias, and other biological factors.
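The abstract does not spell out how cone spacing is converted to a predicted acuity. One common approach, assumed here purely for illustration, is to take the Nyquist sampling limit of a hexagonal cone mosaic:

```python
import math

def predicted_logmar(spacing_arcmin):
    """Predict acuity from center-to-center cone spacing (in arcmin).

    Assumes a hexagonal mosaic: rows are sqrt(3)/2 * s apart, so the Nyquist
    frequency is 1 / (sqrt(3) * s) cycles/arcmin and the minimum angle of
    resolution (half a cycle) is sqrt(3)/2 * s arcmin. This simple mapping is
    an illustrative assumption, not the study's exact formula.
    """
    mar_arcmin = math.sqrt(3) / 2.0 * spacing_arcmin
    return math.log10(mar_arcmin)
```

For a foveal spacing of about 0.5 arcmin this gives roughly −0.36 logMAR, in the same range as the predicted values reported above.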

Funding acknowledgements:  R01EY033580, R01EY017607, F31EY036692, F31EY033204, F31EY036709, T32EY014537, T35HL072483 


(12) Measuring chromatic contrast sensitivity functions from the fovea to far peripheries   Kotaro Kitakami, School of Computing, Tokyo Institute of Technology

Co-authors: Suguru Saito, School of Computing, Tokyo Institute of Technology; Keiji Uchikawa, Human Media Research Center, Kanagawa Institute of Technology

Human contrast sensitivity functions (CSFs) are often utilized in applications such as foveated rendering and saliency analysis. These applications require CSFs at higher eccentricities in the peripheral visual field than previous data provide. We measured chromatic CSFs in the peripheral visual field at eccentricities of up to 49 degrees in the nasal and temporal directions of the left eye. The stimulus was a cosine Gabor patch of 10-degree diameter at all eccentricities. We adjusted the luminance of the stimulus components for each eccentricity so that subjects perceived no achromatic change in the stimulus at any spatial frequency. Contrast thresholds were determined using a two-interval forced-choice procedure with the PSI method. The average background luminance was 31 cd/m², and its chromaticity was CIE D65. The stimulus duration was 0.5 s, with 0.5 s increasing and decreasing temporal slopes. The results in the blue-yellow direction show that CSF differences across eccentricity are small at low spatial frequencies but become larger at higher spatial frequencies. The blue-yellow CSF exhibited a low-pass characteristic, consistent with previous studies. For eccentricities up to 21 degrees, sensitivity tends to decline gradually below 1 cpd and steeply above 1 cpd. For eccentricities greater than 21 degrees, the change point was 0.6 cpd rather than 1 cpd.

Funding acknowledgements:  JSPS KAKENHI Grant Number JP18H03247


(13) Chromatic and luminance contrast adaptation measured using pupillometry and SSVEP   Alex A. Carter, University of York

Co-authors: Abbie J. Lawton, University of York; Daniel H. Baker, University of York; Antony B. Morland, University of York; Lauren E. Welbourne, University of York; Alex R. Wade, University of York

Previous evidence suggests that both chromatic and luminance contrast adaptation reduce contrast sensitivity in the post-adaptation period. This effect is chromatically tuned: adaptation is greatest when the adaptor and probe lie on the same chromatic axis. Previous fMRI data from our lab also suggest S-cone adaptation results in a paradoxical increase in post-stimulus BOLD signal. Here, we use SSVEP and pupillometry to ask whether we can measure physiological correlates of this adaptation. SSVEPs were recorded from V1 using a canonical EEG template (Poncet & Ales, 2023) and pupil diameter was measured using an EyeLink 1000 in 21 participants. Stimuli were contrast-reversing chromatic or luminance disks (5 Hz, 20° visual angle) presented during a 7 s preprobe, 30 s adaptation, and 7 s probe period. The preprobe and probe were always the same chromaticity at 50% of the adapting contrast. Stimuli were L+M or S-cone isolating, giving four conditions overall: (S or L+M) × (adapt or probe). Both pupil size and V1 response increased after adaptation, contradicting the idea that adaptation reduces contrast sensitivity. Pupil data showed that L+M adaptation increases pupil diameter more for L+M probes than for S probes, with S adaptation having little effect. V1 showed an overall effect of adaptation but no differences across conditions, suggesting that the pupils are more sensitive to chromaticity in adaptation, while cortex shows adaptation independent of chromaticity.


(14) Study of contrast detection filters for images   Naoto Yoshii, School of Computing, Tokyo Institute of Technology

Co-author: Suguru Saito, School of Computing, Tokyo Institute of Technology

In perceptual contrast imaging, which removes from images only the contrast information humans cannot perceive, detecting local contrast per frequency is essential. Simple and complex cells in the primary visual cortex (V1) are known to be involved in human contrast detection. Therefore, in this study, we tested filters modeled after simple and complex cells for contrast detection to determine which is more suitable for perceptual contrast imaging. We modeled simple cells with Gabor filters and complex cells with the energy model, the square root of the sum of the squares of two Gabor filter responses with a phase difference of π/2. We used each model for contrast detection to generate perceptual contrast images and conducted subjective evaluation experiments to assess the distinguishability between the generated perceptual contrast images and the input images. Subjects memorized an original input image and then selected the image they believed to be the original in a two-interval forced-choice procedure. The results show that accuracy in the complex-cell model experiments did not differ significantly from chance (0.5), indicating no visible difference between the perceptual contrast images and their originals, whereas accuracy for the simple-cell model did. We therefore conclude that the complex cell model is more suitable for contrast detection in perceptual contrast imaging.
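The quadrature-pair energy model described above lends itself to a compact sketch. The filter size, spatial frequency, and envelope width below are arbitrary illustration values, not the study's parameters, and pooling to a single scalar at the image centre is a simplification:

```python
import numpy as np

def gabor(size, freq, theta, phase, sigma):
    """2D Gabor filter: an oriented sinusoid under an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)          # coordinate along the carrier
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq * xr + phase)

def energy_response(img, freq, theta, size=31, sigma=6.0):
    """Complex-cell energy model: quadrature pair (phase difference pi/2),
    response = sqrt(even^2 + odd^2) of the two simple-cell (Gabor) outputs."""
    even = gabor(size, freq, theta, 0.0, sigma)
    odd = gabor(size, freq, theta, np.pi / 2.0, sigma)
    r_even = float(np.sum(img * even))   # simple-cell response, even phase
    r_odd = float(np.sum(img * odd))     # simple-cell response, odd phase
    return np.hypot(r_even, r_odd)
```

For a phase-shifted grating the energy output stays nearly constant while a single (simple-cell) Gabor response varies with phase, which is the property that distinguishes the two models.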

Funding acknowledgements:  JSPS KAKENHI Grant Number JP18H03247, JP24KJ1095


(15) Melanopsin modulation of cortical S-cone responses   Lauren E. Welbourne, University of York, UK

Co-authors: Joel T. Martin, University of Edinburgh, UK; Federico Segala, University of York, UK; Annie Morsi, University of York, UK; Alex Carter, University of York, UK; Alex R. Wade, University of York, UK; Daniel H. Baker, University of York, UK

Melanopsin is a non-image-forming, light-sensitive retinal photopigment. Melanopsin activation takes longer to peak and has a more prolonged response than that of cone photoreceptors. Recent evidence suggests that melanopsin-driven signals may influence vision (through cone photoreceptor modulation), but it is unclear whether melanopsin can directly stimulate visual cortex (e.g. V1) in addition to subcortical pathways. Our lab recently observed unusual fMRI time course responses in V1 for S-cone isolating stimuli, where the response was maintained for the duration of the stimulus 'off' period; it did not return to baseline after stimulus offset (at 12 seconds). We hypothesised that this was due to lingering melanopsin activation: the S-cone isolating stimuli also activated melanopsin because we did not explicitly silence it in that study. In the present study, we used a custom-made multi-primary LED system to create S-cone isolating stimuli that either activated or silenced melanopsin. Stimuli were presented in a block design, 15 s ON / 30 s OFF, to allow time for a sustained response to return to baseline between conditions. Here we present evidence from 11 participants of melanopsin-driven responses in cortical area V1: the S-cone melanopsin-active condition showed a larger response after stimulus offset than the S-cone melanopsin-silenced condition.

Funding acknowledgements:  BBSRC Grant Number BB/V007580/1


(16) Contrast polarity in photopic, mesopic, and scotopic vision   Lisa Widmayer, University of Marburg

Co-author: Alexander C. Schütz, University of Marburg

The perception of dark and light shows considerable asymmetries. In scotopic vision, "white" patches appear gray – brightness perception is clipped at the upper end of the range. In photopic vision, darks are perceived as relatively more intense than lights. Here, we compared perception of contrast polarities in photopic, mesopic, and scotopic viewing. We tested the perception of positive and negative contrasts in the three viewing conditions, using a circular stimulus (2° radius) presented for 200 ms at 8° eccentricity. In a detection task, we obtained absolute thresholds for positive and negative contrasts. In a matching task, we obtained PSEs when participants compared stimuli with positive and negative contrasts at three contrast levels (0.2, 0.35, 0.5). Stimuli were defined in Weber’s contrast. In the detection task, thresholds were highest in scotopic and lowest in photopic vision. Thresholds were higher for positive than negative contrasts in mesopic and scotopic but not in photopic viewing. In the matching task, the asymmetry was increased in scotopic compared to photopic viewing, such that even more positive contrast was required to match negative contrasts. Our results show that the asymmetry in contrast perception depends on the lighting condition, being largest in scotopic vision in both tasks. As signals from cone receptors are crucial for perceiving "white", the clipped range in scotopic vision also seems to impede the perception of positive contrasts. 
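Weber contrast, the stimulus definition used above, is simply the luminance difference from the background normalised by the background, positive for increments and negative for decrements. A minimal sketch:

```python
def weber_contrast(luminance, background):
    """Weber contrast: (L - L_bg) / L_bg."""
    return (luminance - background) / background

def stimulus_luminance(contrast, background):
    """Invert the definition: the luminance that produces a given Weber contrast."""
    return background * (1.0 + contrast)
```

Under this definition, the positive and negative polarities tested at, say, contrast 0.35 correspond to symmetric luminance excursions above and below the background.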

Funding acknowledgements:  This work was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101001250) and by “The Adaptive Mind”, funded by the Excellence Program of the Hessian Ministry of Higher Education, Science, Research and Art.


(17) Modeling sensitivity to red and green small spots: The effect of cone topography and spectral sensitivity   Maxwell J. Greene, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, CA, USA

Co-authors: Vimal P. Panidyan, Department of Ophthalmology, University of Washington School of Medicine, Seattle, WA, USA; Ramkumar Sabesan, Department of Ophthalmology, University of Washington School of Medicine, Seattle, WA, USA; William S. Tuten, Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, CA, USA

To understand how well sensitivity to small chromatic flashes is explained simply by cone activity, we compared thresholds to 543 nm (“green”) or 680 nm (“red”) incremental flashes with theoretical predictions. Thresholds were measured in two color-normal males for stimuli presented at ~2 deg eccentricity on an achromatic background through an adaptive optics platform. Stimuli subtended 2.25 arcminutes, were 67 ms in duration, and were stabilized on the retina. A strong, positive correlation between red sensitivity (normalized by green sensitivity) and the proportion of stimulated receptors classified as L-cones by optoretinography was found. Theoretical predictions of red and green thresholds were derived from two models. In the first model, sensitivity was assumed to depend only on the cone spectral sensitivities, scaled by the numbers of L and M-cones illuminated by the stimulus. In the second model, thresholds were computed based on cone isomerizations for ideal observers possessing the subjects’ actual cone mosaics. Both models account for much of the variance in the empirical data. For either model, residuals approach their minima when distinct, physiologically-plausible L-cone spectra (separated by 4.5 nm) are assumed for each subject. This suggests intersubject variation in L-cone photopigments. We intend to verify this potential variation by sequencing each subject’s cone opsins.
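The first model described above reduces to a weighted sum of cone spectral sensitivities. The sketch below uses crude Gaussian stand-ins for the L- and M-cone spectra; the peak wavelengths and bandwidth are illustrative assumptions, not the cone fundamentals used in the study:

```python
import math

def gaussian_sensitivity(wavelength_nm, peak_nm, sigma_nm=60.0):
    """Crude Gaussian stand-in for a cone spectral sensitivity (illustration only)."""
    return math.exp(-(wavelength_nm - peak_nm) ** 2 / (2.0 * sigma_nm ** 2))

def mosaic_sensitivity(wavelength_nm, prop_l, l_peak=565.0, m_peak=535.0):
    """Model 1: sensitivity is the cone spectra weighted by the proportion of
    stimulated cones classified as L (prop_l) versus M (1 - prop_l)."""
    return (prop_l * gaussian_sensitivity(wavelength_nm, l_peak)
            + (1.0 - prop_l) * gaussian_sensitivity(wavelength_nm, m_peak))

def red_green_sensitivity_ratio(prop_l):
    """Sensitivity to the 680 nm ('red') flash normalized by the 543 nm ('green')
    flash, as a function of the local L-cone proportion."""
    return mosaic_sensitivity(680.0, prop_l) / mosaic_sensitivity(543.0, prop_l)
```

Even with these toy spectra, the normalized red sensitivity rises monotonically with the L-cone proportion, reproducing the direction of the correlation reported above.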

Funding acknowledgements:  Air Force Office of Scientific Research (FA9550-20-1-0195, FA9550-21-1-0230), National Eye Institute (R01EY023591, R01EY029710, U01EY032055, P30EY003176, P30EY001730, T32EY007043), Alcon Research Institute, Hellman Fellows Program, Burroughs Wellcome Fund Careers at the Scientific Interfaces, and an unrestricted grant from Research to Prevent Blindness


(18) On the relationship between chromatic sensitivity and vividness of illusory colored signals   Paolo Antonino Grasso, Department of Physics and Astronomy, University of Florence, Italy

Co-authors: Federico Tommasi, Department of Physics and Astronomy, University of Florence, Italy; Elisabetta Baldanzi, National Research Council, National Institute of Optics, Florence, Italy; Alessandro Farini, National Research Council, National Institute of Optics, Florence, Italy; Massimo Gurioli, Department of Physics and Astronomy, University of Florence, Italy

Color perception is an integral skill of vision, aiding rapid segregation and categorization of objects in the environment. It depends on both low-level spectral analysis of the light reflected by an object and high-level interpretation of the retinal output. Although interindividual differences in color perception can be largely negligible in real-life environments, there is some indication that these differences can be quite substantial when color appearance is considered. We developed a series of experiments in which participants completed color matching tasks with illusory colored stimuli while we also evaluated their individual chromatic sensitivity using a standardized color assessment test. Our results highlight large variability in color appearance, with this variability being inversely related to individual chromatic sensitivity. Furthermore, our data provide hints for the potential use of chromatic illusions in the assessment of interindividual differences in color processing.

Funding acknowledgements:  European Union - PON Research and Innovation 2014–2020


(19) The effect of fixational eye-movements on the temporal summation at detection threshold: A simulation study   Zahra M. Bagheri, Department of Experimental Psychology, University of Oxford

Co-authors: Allie C. Schneider, Department of Experimental Psychology, University of Oxford; Mengxin Wang, Department of Experimental Psychology, University of Oxford; David H. Brainard, Department of Psychology, University of Pennsylvania; Hannah E. Smithson, Department of Experimental Psychology, University of Oxford

We explored how fixational eye movements (FEMs) affect threshold temporal summation of increment pulses using realistic simulations of early visual processing. Using the Image Systems Engineering Toolbox for Biology, we assessed performance in a spatial 2AFC increment detection task, in which the observer identified whether a stimulus appeared on the left or right. The signal-known-exactly ideal observer was trained on the noise-free photocurrent output of the cone mosaic for both stimulus alternatives, with performance calculated using noisy instances of photocurrents and knowledge of the FEMs. The stimuli, modelled as 0.24 x 2.2 arcmin increments of 543 nm light presented via an AOSLO, included both a single 2 ms flash and pairs of flashes separated by interstimulus intervals (ISIs) of 17 ms, 33 ms, 100 ms, or 300 ms. Detection thresholds, defined as the stimulus contrast corresponding to 75% correct, were assessed with and without FEMs. Without FEMs, thresholds for detecting two flashes separated by 17-100 ms increased slightly with ISI but remained lower than those for a single flash. With FEMs, the modelled differences between single- and two-flash thresholds were less pronounced, suggesting that, at the level of photocurrent signals, FEMs reduce the benefits of temporal summation for detection. Future work will quantify this reduction by simulating FEMs with varying velocities and will explore whether adding a temporal adaptation stage alters the effect of FEMs on performance.

Funding acknowledgements:  UKRI Physics of Life EP/W023873/1


(20) Application of a DMD projector to assess contrast sensitivity perception by creating shades of grey for visual stimuli   Chiara Maria Mariani, Optics Group, School of Physics, University College Dublin, Ireland

Co-author: Brian Vohnsen, Optics Group, School of Physics, University College Dublin, Ireland

Greyscale quality on displays is crucial for contrast sensitivity perception. Digital micro-mirror devices (DMDs) operating in the kHz range allow for different greyscale depths. This is crucial for accurate testing of contrast sensitivity (CS) and for accurate visual stimulation, especially in vision-impaired patients, and there is growing interest in the applications of these devices. Here, we explore a DMD projector (Vialux) for monocular CS determination in 5 healthy subjects (ages 23-55 years) using Gabor patches at different orientations, contrasts, and spatial frequencies. Subjects wore their habitual refractive correction while CS was tested at the fovea. For all subjects, we found that increasing the number of bitplanes used to project stimuli resulted in a more accurate and uniform representation of greyscale. Ultimately, the technology will be used for novel instrumentation to help vision-impaired patients suffering from age-related vision loss. These findings may inform the choice of display technologies in clinical settings where precise visual evaluation is essential.
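The link between bitplanes and greyscale depth follows from binary pulse-width modulation: n bitplanes yield 2^n grey levels, at the cost of frame rate for a fixed mirror switching rate. A minimal sketch of this trade-off (simplified model with illustrative numbers, not the Vialux projector's specifications):

```python
def grey_levels(bitplanes: int) -> int:
    """Number of distinct grey levels achievable with binary
    pulse-width modulation on a DMD."""
    return 2 ** bitplanes

def max_frame_rate(mirror_rate_hz: float, bitplanes: int) -> float:
    """Upper bound on the greyscale frame rate of a binary DMD.

    Simplified model: each greyscale frame requires `bitplanes` binary
    mirror patterns, so frame rate <= mirror rate / bitplanes.
    Real DMD timing also depends on bitplane exposure weighting and
    data-load overhead.
    """
    return mirror_rate_hz / bitplanes
```

For example, under this model a 10 kHz binary pattern rate supports 256 grey levels (8 bitplanes) at up to 1.25 kHz, illustrating why greyscale depth trades off against temporal resolution.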

Funding acknowledgements:  Financial support from Horizon MSCA 2022-DN-01 "ACTIVA" Project 101119695.


(21) Characterising colour processing in anomalous trichromacy with steady-state visually evoked potentials   Ana Rozman, University of Sussex

Co-authors: Lucy P Somers, University of Sussex; Jenny M Bosten, University of Sussex

Colour vision is based on the capture of light by short (S), medium (M) and long (L) wavelength sensitive retinal cones. In postreceptoral colour processing, the outputs of the three cone types are first compared by two cone-opponent mechanisms, L/(L+M) and S/(L+M). In anomalous trichromacy, the separation between L and M cone peak spectral sensitivities is reduced compared to normal trichromacy, leading to decreased sensitivity for L/(L+M) colour differences. However, colour appearance is more similar to that of normal trichromats than cone-opponent models predict. Current evidence suggests this is due to postreceptoral compensation in the cortex, where reduced colour signals are amplified to use available neural resources. We devised a novel approach to investigate the site of postreceptoral compensation using steady-state visually evoked potentials (SSVEPs), captured by electroencephalography. We measured signals in response to flickering stimuli designed to isolate the S/(L+M) and L/(L+M) cone opponent mechanisms at both retinal and cortical sites. If compensation is cortical, we would expect any reduction for anomalous trichromats in retinal L/(L+M) SSVEP signals compared to S/(L+M) SSVEP signals to be rectified at the cortical site. Our study did not exclude the possibility of retinal compensation, in contrast to an existing fMRI study (Tregillus et al., 2021, Curr. Biol.). We present our novel method to address potential challenges in characterising these processes.

Funding acknowledgements:  The study was funded by ERC grant 949242 COLOURCODE to JMB. 


(22) Geometric phase multifocal ophthalmic lenses for presbyopes   Sunil Kumar Chaubey, Optics Group, School of Physics, University College Dublin, Ireland

Co-author: Brian Vohnsen, Optics Group, School of Physics, University College Dublin

A lens that uses polarization modulation to alter the geometric phase of light is being studied as a promising optical element for presbyopia. The lens is formed from liquid-crystal grating structures that act as a converging lens for right-handed circularly polarized light and a diverging lens for left-handed circularly polarized light, resulting in positive and negative defocus for unpolarized incident light. Combining a long-focal-length geometric phase lens with a shorter-focal-length converging lens creates bifocality. Geometric phase ophthalmic lenses can provide improved visual outcomes without undesired cosmetic effects. We demonstrate that such lenses can be a promising alternative to conventional multifocal lenses for presbyopes. We evaluate their applicability using wave-optics analysis with a model eye to determine their pros and cons.
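The bifocality described here can be summarized with thin-lens power addition: the stack presents a different power to each circular polarization, so unpolarized light sees two foci simultaneously. A minimal sketch with hypothetical dioptric values (not the lens parameters studied in this work):

```python
def bifocal_powers(p_converging: float, p_gp: float) -> tuple[float, float]:
    """Two effective powers (dioptres) of a converging lens stacked
    with a geometric phase (GP) lens, under thin-lens power addition.

    Unpolarized light is an equal mix of the two circular
    polarizations; the GP lens contributes +p_gp to one handedness and
    -p_gp to the other, so the stack has two simultaneous foci.
    """
    return p_converging + p_gp, p_converging - p_gp

# Hypothetical example: a +3.00 D converging lens stacked with a
# +/-1.25 D GP lens yields near and distance powers of the bifocal.
near_power, distance_power = bifocal_powers(3.0, 1.25)
```

Because the weak GP lens has the longer focal length, the two resulting powers straddle the power of the stronger converging element, which is the bifocal behaviour exploited for presbyopia.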

Funding acknowledgements:  Horizon MSCA 2022-DN-01; ACTIVA; Project 101119695


(23) Characteristic differences in eye movements in people with Parkinson’s disease   Varun Padikal, Biosciences Institute, Newcastle University, Newcastle, United Kingdom

Co-authors: Maria Villamil, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Penny Lawton, Biosciences Institute, Newcastle University, Newcastle, United Kingdom; Jiahe Cui, Department of Engineering Science, University of Oxford, Oxford, United Kingdom; Dana Turner, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Allie C. Schneider, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Hannah Smithson, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Jenny Read, Biosciences Institute, Newcastle University, Newcastle, United Kingdom; Laura Young, Biosciences Institute, Newcastle University, Newcastle, United Kingdom

Parkinson's disease is a neurodegenerative condition that impairs motor control, including oculomotor function. This study focuses on identifying characteristic differences in eye movements between people with Parkinson's disease and healthy controls across four tasks. Two static tasks, a fixation task and a tumbling-E task, were measured using an adaptive optics scanning laser ophthalmoscope (AOSLO) to capture fixational eye movements with high spatial and temporal resolution. Two moving-target tasks, guided saccade and smooth pursuit, were measured using the EyeLink 1000 Plus to study large-scale eye movements. Three participants with Parkinson's disease and three healthy controls performed these tasks. In the fixation task, the participant fixated on a static target for 5 s, and in the tumbling-E task, an 'E' near the acuity limit was presented in different orientations for 0.7 s. The guided saccade task required the participant to quickly shift their gaze between locations separated by 11°, while the smooth pursuit task involved tracking a smoothly moving target at two speeds: 10°/s and 20°/s. Both moving-target tasks were performed in horizontal and vertical directions. Oculomotor parameters such as saccade amplitude, saccade velocity, saccadic delay, saccadic rate, and intersaccadic intervals were extracted from the eye movement traces. Here we report the differences in these parameters between healthy controls and participants with Parkinson's disease.
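Saccade parameters of the kind listed above are commonly extracted with a velocity-threshold detector. The sketch below is a generic textbook version of that approach, not necessarily the authors' pipeline; the 30°/s threshold is an illustrative default, and real analyses typically add smoothing and minimum-duration criteria.

```python
import numpy as np

def detect_saccades(positions_deg, fs_hz, vel_threshold=30.0):
    """Velocity-threshold saccade detection on a 1-D gaze trace.

    positions_deg : array of gaze positions (degrees)
    fs_hz         : sampling rate (Hz)
    Returns a list of (onset_idx, offset_idx, amplitude_deg,
    peak_velocity_deg_per_s) tuples.
    """
    velocity = np.gradient(positions_deg) * fs_hz  # deg/s
    fast = np.abs(velocity) > vel_threshold
    saccades = []
    i, n = 0, len(fast)
    while i < n:
        if fast[i]:
            # Extend to the end of this supra-threshold run.
            j = i
            while j + 1 < n and fast[j + 1]:
                j += 1
            amplitude = positions_deg[j] - positions_deg[i]
            peak_vel = np.max(np.abs(velocity[i : j + 1]))
            saccades.append((i, j, float(amplitude), float(peak_vel)))
            i = j + 1
        else:
            i += 1
    return saccades
```

Amplitude, peak velocity, rate, and intersaccadic intervals then follow directly from the detected events; saccadic delay additionally requires the target-step timestamps.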


(24) Comparisons of comparisons of numerosity   Joshua Solomon, City St. George's, University of London

The purported bidirectionality of numerosity adaptation was tested using the comparison-of-comparisons technique, which is ostensibly resistant to certain types of non-perceptual bias. Two participants (including JAS) were given these instructions: 'Adapting stimuli will be exposed for 5 seconds in the top two quadrants. After adaptation, test stimuli will appear in all four quadrants. In each of the top two quadrants, there will be 50 dots. Select the lower quadrant whose numerosity is FARTHEST from 50. (It may be higher or lower.)' There were 100 or 25 dots in each of the two upper quadrants during adaptation. Trials were blocked by this adaptation numerosity. Bias downward, in which responses suggest the appearance of fewer than 50 dots in each upper quadrant, was significant for both participants with 100-dot adapting stimuli, confirming that this paradigm is adequate for establishing adaptation-induced perceptual biases. Neither participant’s data suggested any bias (upward or downward) with 25-dot adapting stimuli.


(25) Deviation mapping for foveal cone mosaic topography   Jenna Grieshop, Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI USA

Co-authors: Emma Warr, Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI USA; Ashleigh Walesa, Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI USA; Katherine Hemsworth, Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI USA; Joseph Carroll, Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, WI USA; Joint Dept. of Biomedical Engineering, Marquette University and the Medical College of Wisconsin, Milwaukee, WI USA; Cell Biology, Neurobiology & Anatomy, Medical College of Wisconsin, Milwaukee, WI USA

Deviation mapping is commonly used across retinal imaging modalities. Here we compiled data from two labs (UC Berkeley [1] & MCW) to create an AOSLO-specific deviation mapping tool for measures of the foveal cone mosaic. Foveal cones were identified for 87 normative regions of interest (ROIs) (26M, 61F; 13-67 yrs, median=26 yrs) and for 5 pathological ROIs (2 Bornholm Eye Disease, 3 Albinism; 1M, 4F; 16-50 yrs, median=42 yrs). ROIs were cropped and resized to a common scale for comparison. Density and nearest neighbor distance (NND) maps were generated for each ROI, and the cone density centroid [2] (CDC) was determined for each map. Normative maps were aligned using these CDC locations, and average and standard deviation (SD) maps were created for both density and NND. Pathology maps were compared to these normative composite maps. At the CDC, average (SD) density was 1.79E+5 (2.55E+4) cones/mm^2 and average (SD) NND was 2.08 (0.16) µm. For pathological ROIs, the percentage of pixels within 1 SD of the normative data was comparable for density and NND except in two individuals where density was more deviant than NND (consistent with mosaic irregularity and/or random cone loss). Deviation mapping applied to foveal AOSLO data can be used to assess the normality of individual foveal ROIs. Comparing deviation maps across different metrics may provide valuable insight into the underlying properties of the cone mosaic in various retinal pathologies. 1) PMID:31348002 2) PMID:34343479
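The within-1-SD comparison described above amounts to a pixelwise z-score of a subject's metric map against the aligned normative composites. A minimal NumPy sketch of that computation (array shapes and values are illustrative; the real tool also handles CDC alignment and rescaling):

```python
import numpy as np

def deviation_map(subject_map, normative_mean, normative_sd):
    """Pixelwise z-score of a subject's metric map (e.g. cone density
    or NND) against normative mean and SD maps of the same shape."""
    return (subject_map - normative_mean) / normative_sd

def percent_within_1sd(subject_map, normative_mean, normative_sd):
    """Percentage of pixels falling within 1 SD of the normative mean."""
    z = deviation_map(subject_map, normative_mean, normative_sd)
    return float(np.mean(np.abs(z) <= 1.0) * 100.0)
```

The same two functions apply unchanged to density and NND maps, which is what makes the cross-metric comparison of deviation patterns straightforward.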

Funding acknowledgements:  R01EY017607, R01EY033580


(26) The optics of myopia onset and its potential impact on halting progression   Brian Vohnsen, University College Dublin

The human eye is the result of millennia of evolution optimized for outdoor vision. Recent changes in lifestyle have perturbed emmetropization and led to frequent excessive eye growth. Myopia onset typically happens in the school years, when the process of emmetropization can be compromised. To prevent this, spending more time outdoors is encouraged. Being outdoors is associated with reduced dioptric demands, higher brightness and dopamine release, different spectral and spatial frequencies, and potential changes in choroidal thickness. Most apparent, outdoors the pupil is smaller, which narrows the pencil of light falling onto both the macula and the peripheral retina. This reduces the risk of light leaking out of cone and rod photoreceptors and thus maximizes the light-capture efficiency of the layered visual pigments. Here, the role of pupil size is analyzed using ray optics for a schematic eye model and schematic elongated photoreceptors. Each photoreceptor does not perceive an image but only operates to maximize its light capture. It is found that a 3 mm pupil is ideal to prevent leakage of light from photoreceptors. Axial elongation moves the retina further from the pupil, thereby reducing the solid angle perceived at the photoreceptors. How these findings may impact lens designs used to prevent myopia progression is discussed.
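The geometric effect of axial elongation on the light converging onto a photoreceptor can be illustrated with the small-angle approximation for solid angle, Ω ≈ A/d². This is a simplification of the schematic-eye ray tracing described in the abstract; the pupil and axial-length values below are illustrative only.

```python
import math

def pupil_solid_angle(pupil_diameter_mm: float, axial_length_mm: float) -> float:
    """Approximate solid angle (steradians) the pupil subtends at the
    retina, using the small-angle approximation omega ~= area / d^2 and
    treating the pupil-to-retina distance as the axial length."""
    area = math.pi * (pupil_diameter_mm / 2.0) ** 2
    return area / axial_length_mm ** 2

# A 3 mm pupil seen from an elongated (myopic) retina subtends a
# smaller solid angle than from a shorter emmetropic eye.
omega_short = pupil_solid_angle(3.0, 22.0)
omega_long = pupil_solid_angle(3.0, 25.0)
```

This captures, in the simplest terms, why axial elongation reduces the solid angle of incident light at the photoreceptors for a fixed pupil size.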

Funding acknowledgements:  Financial support from Horizon MSCA 2022-DN-01 & “ACTIVA”: Project 101119695


(27) Abnormal GSK3β activity affects Drosophila vision   Oscar Solis, University of York

Co-authors: Alex Wade, University of York; Ines Hahn, University of York

The enzyme glycogen synthase kinase 3β (GSK3β) plays a key role in the development and maintenance of axons in neurons. Recent work has revealed microtubule (MT) unbundling in the axons of neurons with overactive or inactive GSK3β, and it is hypothesised that this phenotype is tightly related to impaired axonal transport and synaptic defects. To test whether abnormal GSK3β activity affects neuronal function, the Drosophila visual system was probed using the steady-state visually evoked potential (SSVEP) technique. This involved recording electrophysiological responses from the fly eye presented with flickering-light stimuli. Analyses revealed abnormal visual responses in flies expressing overactive or inactive GSK3β compared with control flies. To test whether this was due to MT unbundling, flies were fed the MT-stabilising drug Epothilone B (EpoB). The drug did not rescue the visual defects but instead led to adverse effects. Taken together, these results suggest that GSK3β dysregulation leads to neuronal dysfunction in the fly visual system, but that MT unbundling may not be the sole mechanism underlying these visual defects.