Showing 1-4 of 4 trials
NCT06852534
How does one know what to look at in a scene? Imagine a "Where's Waldo" game - it's challenging to find Waldo because there are many 'salient' locations in the picture, each vying for one's attention. One can only attend to a small location in the picture at a given moment, so to find Waldo, one needs to direct one's attention to different locations. One prominent theory of how this is accomplished claims that important locations are identified based on distinct feature types (for example, motion or color), with the locations most unique relative to the background the most likely to be attended. An important component of this theory is that individual feature dimensions (again, color or motion) are computed within their own 'feature maps', which are thought to be implemented in specific brain regions. However, whether and how specific brain regions contribute to these feature maps remains unknown.

The goal of this study is to determine how brain regions that respond strongly to different feature types (color and motion) and that encode the spatial locations of visual stimuli extract 'feature dimension maps' based on stimulus properties, including feature contrast. The investigators hypothesize that feature-selective brain regions act as neural feature dimension maps, and thus encode representations of salient location(s) based on their preferred feature dimension.

The investigators will collect eye-tracking data while participants view visual stimuli made salient based on different combinations of feature dimensions. From the eye-tracking data, the investigators will construct fixation heat maps for each feature dimension at every level of salience, allowing them to connect the behavioral data to a subsequent fMRI dataset. Each participant will freely view the stimuli as they appear on the computer display.
Across trials, the investigators will manipulate 1) the 'strength' of the salient locations based on how different the salient stimulus is compared to the background, 2) the number of salient locations, and 3) the feature value(s) used to make each location salient. Altogether, these manipulations will help the investigators fully understand these critical salience computations in the healthy human visual system.
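The fixation heat maps described above can be sketched in a few lines: bin the fixation coordinates over the display, then smooth with a Gaussian. The Python below is a minimal illustration with invented parameters (an 800x600 display, a 25-pixel smoothing width, simulated fixations), not the study's actual analysis pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_heat_map(fix_x, fix_y, width, height, sigma_px=25, bin_px=1):
    """Build a smoothed fixation-density map from fixation coordinates.

    fix_x, fix_y : arrays of fixation positions in screen pixels
    sigma_px     : Gaussian smoothing width (roughly foveal extent)
    """
    # 2-D histogram of fixation counts over the display
    counts, _, _ = np.histogram2d(
        fix_y, fix_x,
        bins=(height // bin_px, width // bin_px),
        range=[[0, height], [0, width]],
    )
    # Smooth with a Gaussian kernel to approximate the attended region
    heat = gaussian_filter(counts, sigma=sigma_px / bin_px)
    # Normalize so the map sums to 1 (a fixation probability density)
    return heat / heat.sum()

# Example: simulated fixations clustered on one salient location
rng = np.random.default_rng(0)
fx = rng.normal(400, 30, size=200)   # fixations scattered around x = 400
fy = rng.normal(300, 30, size=200)   # ... and y = 300
hm = fixation_heat_map(fx, fy, width=800, height=600)
peak_y, peak_x = np.unravel_index(hm.argmax(), hm.shape)
```

Maps like this, computed separately per feature dimension and salience level, can then be compared against the stimulus-reconstruction maps from the fMRI experiments.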
NCT06175312
How does one know what to look at in a scene? Imagine a "Where's Waldo" game - it's challenging to find Waldo because there are many 'salient' locations in the picture, each vying for one's attention. One can only attend to a small location in the picture at a given moment, so to find Waldo, one needs to direct one's attention to different locations. One prominent theory of how this is accomplished claims that important locations are identified based on distinct feature types (for example, motion or color), with the locations most unique relative to the background the most likely to be attended. An important component of this theory is that individual feature dimensions (again, color or motion) are computed within their own 'feature maps', which are thought to be implemented in specific brain regions. However, whether and how specific brain regions contribute to these feature maps remains unknown.

The goal of this study is to determine how brain regions that respond strongly to different feature types (color and motion) and that encode the spatial locations of visual stimuli extract 'feature dimension maps' based on stimulus properties, including feature contrast. The investigators hypothesize that feature-selective brain regions act as neural feature dimension maps, and thus encode representations of salient location(s) based on their preferred feature dimension.

The investigators will scan healthy human participants using functional MRI (fMRI) in a repeated-measures design while they view visual stimuli made salient based on different combinations of feature dimensions. The investigators will employ state-of-the-art multivariate analysis techniques that allow them to reconstruct an 'image' of the stimulus representation encoded by each brain region, dissecting how neural tissue identifies salient locations.
Each participant will perform a challenging task at the center of the screen to ensure they keep their eyes still and ignore the stimuli presented in the periphery; these peripheral stimuli are used to gauge how the visual system automatically extracts important locations without confounds such as eye movements. Across trials and experiments, the investigators will manipulate 1) the 'strength' of the salient locations, i.e., how different the salient stimulus is from the background, 2) the number of salient locations, and 3) the feature value(s) used to make each location salient. Altogether, these manipulations will help the investigators fully understand these critical salience computations in the healthy human visual system.
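The 'strength' manipulation can be made concrete with a toy feature-contrast computation: an item's salience grows with how different its feature value is from the background items. The sketch below assumes angular feature values (e.g., hue or motion direction in degrees) and invented example numbers; it is purely illustrative, not the study's stimulus-generation code.

```python
import numpy as np

def feature_contrast_map(feature_values):
    """Local feature contrast: how different each item is from the rest.

    feature_values : 1-D sequence of feature values (e.g., hue in degrees)
    Returns one salience value per item; unique items score highest.
    """
    values = np.asarray(feature_values, dtype=float)
    salience = np.empty(len(values))
    for i in range(len(values)):
        background = np.delete(values, i)       # all other items
        # circular difference, appropriate for angular features
        diff = np.abs((values[i] - background + 180) % 360 - 180)
        salience[i] = diff.mean()               # mean contrast vs. background
    return salience

# A unique item (hue 0) among near-identical distractors (hue ~120):
hues = [120, 118, 122, 0, 121, 119]
s = feature_contrast_map(hues)
target = int(np.argmax(s))   # the unique item wins the salience competition
```

Shrinking the feature difference between target and background (e.g., hue 100 instead of 0) lowers the target's contrast score, which is the sense in which salience 'strength' is parametrically manipulated.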
NCT06733467
How does one know what to look at in a scene? Imagine a "Where's Waldo" game - it's challenging to find Waldo because there are many 'salient' locations in the picture, each vying for one's attention. One can only attend to a small location in the picture at a given moment, so to find Waldo, one needs to direct one's attention to different locations. One prominent theory of how this is accomplished claims that important locations are identified based on distinct feature types (for example, motion or color), with the locations most unique relative to the background the most likely to be attended. An important component of this theory is that individual feature dimensions (again, color or motion) are computed within their own 'feature maps', which are thought to be implemented in specific brain regions. However, whether and how specific brain regions contribute to these feature maps, along with their role in supporting memory of visual information over brief delays, remains unknown.

The goal of this study is to determine how brain regions that respond strongly to different feature types (color and motion) and that encode the spatial locations of visual stimuli contribute to memory of visual features. Based on previous studies, the investigators hypothesize that feature-selective brain regions act as neural feature dimension maps, and thus encode representations of relevant location(s) based on their preferred feature dimension, such that the stimulus representation in the most relevant feature map is maintained over a memory delay period to support adaptive behavior.

The investigators will scan healthy human participants using functional MRI (fMRI) in a repeated-measures design while they view and remember different features of visual stimuli (e.g., color or motion).
The investigators will employ state-of-the-art multivariate analysis techniques that allow them to reconstruct an 'image' of the stimulus representation encoded by each brain region, dissecting how neural tissue identifies salient locations. Each participant will recall the remembered feature value (color or motion) of a stimulus presented in the periphery. Across trials, the investigators will manipulate which feature dimension is remembered (color, motion, or neither). This manipulation will help the investigators fully understand these critical relevance computations in the healthy human visual system.
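Multivariate reconstruction of this kind is often implemented as an inverted encoding model: estimate channel-to-voxel weights from training data, then invert them to recover a channel response profile from a held-out activity pattern. The sketch below runs on simulated data with invented dimensions (6 spatial channels, 50 voxels, idealized tuning curves); it illustrates the general technique, not the investigators' actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_voxels, n_trials = 6, 50, 120   # hypothetical sizes

def channel_responses(locations):
    """Idealized tuning: each channel responds most near its preferred angle."""
    prefs = np.arange(n_channels) * 60.0                 # preferred angles (deg)
    d = np.abs((locations[:, None] - prefs + 180) % 360 - 180)
    return np.cos(np.deg2rad(d) / 2) ** 5                # smooth tuning curves

# Simulated training data: known stimulus locations -> voxel patterns
train_locs = rng.uniform(0, 360, n_trials)
C_train = channel_responses(train_locs)                  # trials x channels
W = rng.normal(size=(n_channels, n_voxels))              # true channel->voxel map
B_train = C_train @ W + 0.1 * rng.normal(size=(n_trials, n_voxels))

# Step 1: estimate the weights from training data (least squares)
W_hat = np.linalg.lstsq(C_train, B_train, rcond=None)[0]

# Step 2: invert the model to reconstruct channel responses for a new trial
test_loc = np.array([120.0])
b_test = channel_responses(test_loc) @ W                 # held-out pattern
C_hat = np.linalg.lstsq(W_hat.T, b_test.T, rcond=None)[0].T

# The reconstructed profile should peak at the channel preferring ~120 deg
peak_channel = int(np.argmax(C_hat))
```

With spatial channels tiling the visual field, the reconstructed profile becomes the 'image' of the stimulus representation; comparing it across remembered-feature conditions tests whether the relevant feature map maintains the representation over the delay.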
NCT06281457
How does one know what to look at in a scene? Imagine a "Where's Waldo" game - it's challenging to find Waldo because there are many 'salient' locations in the picture, each vying for one's attention. One can only attend to a small location in the picture at a given moment, so to find Waldo, one needs to direct one's attention to different locations. One prominent theory of how this is accomplished claims that important locations are identified based on distinct feature types (for example, motion or color), with the locations most unique relative to the background the most likely to be attended. An important component of this theory is that individual feature dimensions (again, color or motion) are computed within their own 'feature maps', which are thought to be implemented in specific brain regions. However, whether and how specific brain regions contribute to these feature maps remains unknown.

The goal of this study is to determine how brain regions that respond strongly to different feature types (color and motion) and that encode the spatial locations of visual stimuli transform 'feature dimension maps' based on stimulus properties as a function of task instructions. The investigators hypothesize that feature-selective brain regions act as neural feature dimension maps, and thus encode representations of relevant location(s) based on their preferred feature dimension, such that the stimulus representation in the most relevant feature map is up-regulated to support adaptive behavior.

The investigators will scan healthy human participants using functional MRI (fMRI) in a repeated-measures design while they view visual stimuli made relevant based on a cued feature dimension (e.g., color or motion). The investigators will employ state-of-the-art multivariate analysis techniques that allow them to reconstruct an 'image' of the stimulus representation encoded by each brain region, dissecting how neural tissue identifies salient locations.
Each participant will perform a challenging discrimination task based on the cued feature (reporting the motion direction or the color of the stimulus dots) of a stimulus presented in the periphery; the stimuli themselves are identical across trial types. Across trials, the investigators will manipulate the attended feature (color, motion, or the fixation point). This manipulation will help the investigators fully understand these critical relevance computations in the healthy human visual system.
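The hypothesized up-regulation of the relevant feature map can be sketched as a weighted combination of feature dimension maps, with the cued dimension receiving a multiplicative gain. Everything below (the 4x4 toy maps, the gain value of 2) is invented for illustration; the sketch is a cartoon of the hypothesis, not a model fit to data.

```python
import numpy as np

def priority_map(color_map, motion_map, cued="color", gain=2.0):
    """Combine feature dimension maps, up-weighting the task-relevant one.

    color_map, motion_map : 2-D feature-contrast maps (same shape)
    cued : which dimension the task instruction makes relevant
    gain : hypothetical multiplicative boost applied to the cued map
    """
    w_color = gain if cued == "color" else 1.0
    w_motion = gain if cued == "motion" else 1.0
    combined = w_color * color_map + w_motion * motion_map
    return combined / combined.max()     # normalized priority map

# Two toy maps, each with a different salient location:
color = np.zeros((4, 4)); color[1, 1] = 1.0    # color-defined spot
motion = np.zeros((4, 4)); motion[2, 3] = 0.9  # motion-defined spot

# Cue color: the color-defined location wins the priority competition
p = priority_map(color, motion, cued="color")
winner = np.unravel_index(p.argmax(), p.shape)
```

Switching the cue to motion flips the winning location, which is the signature the fMRI reconstructions would test for: the same stimulus, but a different feature map dominating depending on task instructions.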