Silvasti, Sanni A.; Valkonen, Janne K.; Nokelainen, Ossi
12 pages
Abstract: Vision is a vital attribute for foraging, navigation, mate selection and social signalling in animals, which often have a very different colour perception from humans. To understand how animal colour perception works, vision models provide the smallest colour difference that animals of a given species are assumed to detect. To determine this just-noticeable difference, or JND, vision models use Weber fractions that set discrimination thresholds of a stimulus compared to its background. However, although vision models are widely used, they rely on assumed Weber fractions, since the exact fractions are unknown for most species. Here, we test: i) which Weber fractions in the long-, middle- and shortwave (i.e. L, M, S) colour channels best describe blue tit (Cyanistes caeruleus) colour discrimination, ii) how changes in the hue of saturated colours and iii) chromatic background noise impair search behaviour in blue tits. We show that the behaviourally verified Weber fractions on achromatic backgrounds were L: 0.05, M: 0.03 and S: 0.03, indicating high colour sensitivity. In contrast, on saturated chromatic backgrounds, the correct Weber fractions were considerably higher: L: 0.20, M: 0.17 and S: 0.15, indicating a less detailed colour perception. Chromatic complexity of the backgrounds affected the longwave channel, while the middle- and shortwave channels were mostly unaffected. We caution that using a vision model in which colour discrimination is determined under achromatic viewing conditions, as it often is, can lead to misleading interpretations of biological interactions in natural, colourful environments.
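The Weber fractions reported above feed into receptor-noise-limited colour distance models of the kind the abstract describes. A minimal sketch, assuming the trichromatic Vorobyev-Osorio formulation (the function name and the example quantum-catch values are illustrative, not taken from the paper):

```python
import math

def jnd_distance(q_stim, q_bg, weber):
    """Receptor-noise-limited chromatic distance for a trichromat.
    q_stim, q_bg: photoreceptor quantum catches (L, M, S) of the
    stimulus and its background; weber: per-channel Weber fractions.
    Returns the distance in JND units; values above 1 are
    conventionally taken as discriminable."""
    # receptor contrasts: log ratio of stimulus to background catch
    df = [math.log(s / b) for s, b in zip(q_stim, q_bg)]
    e1, e2, e3 = weber
    # each channel's noise weights the contrast difference of the other two
    num = (e1 * (df[1] - df[2])) ** 2 \
        + (e2 * (df[0] - df[2])) ** 2 \
        + (e3 * (df[0] - df[1])) ** 2
    den = (e1 * e2) ** 2 + (e1 * e3) ** 2 + (e2 * e3) ** 2
    return math.sqrt(num / den)
```

Because all Weber fractions enter the denominator squared, scaling them up uniformly shrinks the computed distance: the same stimulus-background contrast yields fewer JNDs under the chromatic-background fractions (L: 0.20, M: 0.17, S: 0.15) than under the achromatic ones (L: 0.05, M: 0.03, S: 0.03), which is exactly the reduced sensitivity the abstract reports.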
Stiles, Noelle R. B.; Patel, Vivek R.; Weiland, James D.
11 pages
Abstract: Crossmodal mappings associate features (such as spatial location) between audition and vision, thereby aiding sensory binding and perceptual accuracy. Previously, it has been unclear whether patients with artificial vision will develop crossmodal mappings despite the low spatial and temporal resolution of their visual perception (particularly in light of the remodeling of the retina and visual cortex that takes place during decades of vision loss). To address this question, we studied crossmodal mappings psychophysically in retinitis pigmentosa patients with partial visual restoration by means of Argus II retinal prostheses, which incorporate an electrode array implanted on the retinal surface that stimulates still-viable ganglion cells with a video stream from a head-mounted camera. We found that Argus II patients (N = 10) exhibit significant crossmodal mappings between auditory location and visual location, and between auditory pitch and visual elevation, equivalent to those of age-matched sighted controls (N = 10). Furthermore, Argus II patients (N = 6) were able to use crossmodal mappings to locate a visual target more quickly with auditory cueing than without. Overall, restored artificial vision was shown to interact with audition via crossmodal mappings, which implies that the reorganization during blindness and the limitations of artificial vision did not prevent the relearning of crossmodal mappings. In particular, cueing based on crossmodal mappings was shown to improve visual search with a retinal prosthesis. This result represents a key first step toward leveraging crossmodal interactions for improved patient visual functionality.
Wegner, Thomas G. G.; Grenzebach, Jan; Bendixen, Alexandra; Einhaeuser, Wolfgang...
20 pages
Abstract: In multistability, perceptual interpretations ("percepts") of ambiguous stimuli alternate over time. There is considerable debate as to whether similar regularities govern the first percept after stimulus onset and percepts during prolonged presentation. We address this question in a visual pattern-component rivalry paradigm by presenting two overlaid drifting gratings, which participants perceived as individual gratings passing in front of each other ("segregated") or as a plaid ("integrated"). We varied the enclosed angle ("opening angle") between the gratings (experiments 1 and 2) and stimulus orientation (experiment 2). The relative number of integrated percepts increased monotonically with opening angle. The point of equality, where half of the percepts were integrated, was at a smaller opening angle at onset than during prolonged viewing. The functional dependence of the relative number of integrated percepts on opening angle showed a steeper curve at onset than during prolonged viewing. Dominance durations of integrated percepts were longer at onset than during prolonged viewing and increased with opening angle. The general pattern persisted when stimuli were rotated (experiment 2), despite some perceptual preference for cardinal motion directions over oblique directions. Analysis of eye movements, specifically the slow phase of the optokinetic nystagmus (OKN), confirmed the veridicality of participants' reports and provided a temporal characterization of percept formation after stimulus onset. Together, our results show that the first percept after stimulus onset exhibits a different dependence on stimulus parameters than percepts during prolonged viewing. This underlines the distinct role of the first percept in multistability.
Abstract: In this work, we examined the color tuning of units in the hidden layers of the AlexNet, VGG-16 and VGG-19 convolutional neural networks and their relevance for the successful recognition of an object. We first selected the patches for which the units are maximally responsive among the 1.2 M images of the ImageNet training dataset. We segmented these patches using a k-means clustering algorithm on their chromatic distribution. Then we independently varied the color of these segments, both in hue and chroma, to measure each unit's chromatic tuning. The models exhibited properties at times similar or opposed to the known chromatic processing of biological systems. We found that, similarly to the most anterior occipital visual areas in primates, the last convolutional layer exhibited high color sensitivity. We also found the gradual emergence of single- to double-opponent kernels. Contrary to cells in the visual system, however, these kernels were selective for hues that gradually transition from being broadly distributed in early layers to mainly falling along the blue-orange axis in late layers. In addition, we found that the classification performance of our models varies as we change the color of our stimuli in accordance with the models' kernel properties. Performance was highest for colors the kernels maximally responded to, and images responsible for the activation of color-sensitive kernels were more likely to be misclassified as we changed their color. These observations were shared by all three networks, thus suggesting that they are general properties of current convolutional neural networks trained for object recognition.
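The segmentation step described above, clustering a patch's pixels by their chromatic coordinates with k-means, can be sketched as follows. This is a minimal NumPy-only illustration; the paper's exact feature space, cluster count and initialization are assumptions:

```python
import numpy as np

def kmeans_color_segments(pixels, k=3, iters=20, seed=0):
    """Minimal k-means over pixel chromaticities.
    pixels: (N, C) float array of per-pixel chromatic coordinates
    (e.g. two opponent-color channels); returns (labels, centers).
    Illustrative sketch only, not the paper's implementation."""
    rng = np.random.default_rng(seed)
    # initialize centers from k distinct random pixels
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest center (Euclidean distance)
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its assigned pixels,
        # keeping the old center if a cluster ends up empty
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

Each resulting label map partitions the patch into chromatically homogeneous segments whose colors can then be varied independently in hue and chroma, as the abstract describes.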