Scores on benchmarks
Model rank shown below is with respect to all public models.

- average_vision: .303 (rank 124, 81 benchmarks)
  - neural_vision: .310 (rank 114, 38 benchmarks)
    - V1: .324 (rank 250, 24 benchmarks)
      - FreemanZiemba2013.V1-pls v2 [reference]: .232 (rank 345; recordings from 102 sites in V1, 315 images)
      - Marques2020 [reference]: .673 (rank 282, 22 benchmarks)
        - V1-orientation: .795 (rank 262, 7 benchmarks)
          - Marques2020_DeValois1982-pref_or v1: .961 (rank 132)
          - Marques2020_Ringach2002-circular_variance v1: .649 (rank 307)
          - Marques2020_Ringach2002-cv_bandwidth_ratio v1: .848 (rank 178)
          - Marques2020_Ringach2002-opr_cv_diff v1: .861 (rank 183)
          - Marques2020_Ringach2002-or_bandwidth v1: .641 (rank 313)
          - Marques2020_Ringach2002-or_selective v1: .986 (rank 78)
          - Marques2020_Ringach2002-orth_pref_ratio v1: .622 (rank 298)
        - V1-receptive_field_size: .826 (rank 47, 2 benchmarks)
          - Marques2020_Cavanaugh2002-grating_summation_field v1 [reference]: .844 (rank 61)
          - Marques2020_Cavanaugh2002-surround_diameter v1 [reference]: .808 (rank 33)
        - V1-response_magnitude: .700 (rank 311, 3 benchmarks)
          - Marques2020_FreemanZiemba2013-max_noise v1 [reference]: .586 (rank 311)
          - Marques2020_FreemanZiemba2013-max_texture v1 [reference]: .720 (rank 292)
          - Marques2020_Ringach2002-max_dc v1: .794 (rank 330)
        - V1-response_selectivity: .578 (rank 294, 4 benchmarks)
          - Marques2020_FreemanZiemba2013-texture_selectivity v1 [reference]: .571 (rank 311)
          - Marques2020_FreemanZiemba2013-texture_sparseness v1 [reference]: .645 (rank 225)
          - Marques2020_FreemanZiemba2013-texture_variance_ratio v1 [reference]: .899 (rank 48)
          - Marques2020_Ringach2002-modulation_ratio v1: .198 (rank 337)
        - V1-spatial_frequency: .722 (rank 255, 3 benchmarks)
          - Marques2020_DeValois1982-peak_sf v1: .843 (rank 42)
          - Marques2020_Schiller1976-sf_bandwidth v1 [reference]: .550 (rank 319)
          - Marques2020_Schiller1976-sf_selective v1 [reference]: .773 (rank 239)
        - V1-surround_modulation: .685 (rank 138, 1 benchmark)
          - Marques2020_Cavanaugh2002-surround_suppression_index v1 [reference]: .685 (rank 138)
        - V1-texture_modulation: .404 (rank 334, 2 benchmarks)
          - Marques2020_FreemanZiemba2013-abs_texture_modulation_index v1 [reference]: .242 (rank 338)
          - Marques2020_FreemanZiemba2013-texture_modulation_index v1 [reference]: .567 (rank 314)
      - Coggan2024_fMRI.V1-rdm v1: .066 (rank 68)
    - V2: .187 (rank 97, 2 benchmarks)
      - FreemanZiemba2013.V2-pls v2 [reference]: .321 (rank 102; recordings from 103 sites in V2, 315 images)
      - Coggan2024_fMRI.V2-rdm v1: .052 (rank 107)
    - V4: .396 (rank 58, 5 benchmarks)
      - MajajHong2015.V4-pls v3 [reference]: .582 (rank 122; recordings from 88 sites in V4, 2560 images)
      - Sanghavi2020.V4-pls v1 [reference]: .618 (rank 195; recordings from 47 sites in V4, 5760 images)
      - SanghaviJozwik2020.V4-pls v1 [reference]: .488 (rank 98; recordings from 50 sites in V4, 4916 images)
      - SanghaviMurty2020.V4-pls v1 [reference]: .213 (rank 141; recordings from 46 sites in V4, 300 images)
      - Coggan2024_fMRI.V4-rdm v1: .079 (rank 41)
    - IT: .334 (rank 115, 7 benchmarks)
      - Bracci2019.anteriorVTC-rdm v1: .265 (rank 88)
      - MajajHong2015.IT-pls v3 [reference]: .540 (rank 88; recordings from 168 sites in IT, 2560 images)
      - Sanghavi2020.IT-pls v1 [reference]: .535 (rank 107; recordings from 88 sites in IT, 5760 images)
      - SanghaviJozwik2020.IT-pls v1 [reference]: .493 (rank 173; recordings from 26 sites in IT, 4916 images)
      - SanghaviMurty2020.IT-pls v1 [reference]: .388 (rank 87; recordings from 29 sites in IT, 300 images)
      - Coggan2024_fMRI.IT-rdm v1: .114 (rank 144)
  - behavior_vision: .295 (rank 131, 43 benchmarks)
    - Geirhos2021-error_consistency [reference]: .180 (rank 152, 17 benchmarks)
      - Geirhos2021colour-error_consistency v1 [reference]: .214 (rank 181)
      - Geirhos2021contrast-error_consistency v1 [reference]: .140 (rank 155)
      - Geirhos2021cueconflict-error_consistency v1 [reference]: .171 (rank 146)
      - Geirhos2021edge-error_consistency v1 [reference]: .084 (rank 131)
      - Geirhos2021eidolonI-error_consistency v1 [reference]: .276 (rank 174)
      - Geirhos2021eidolonII-error_consistency v1 [reference]: .267 (rank 165)
      - Geirhos2021eidolonIII-error_consistency v1 [reference]: .258 (rank 166)
      - Geirhos2021falsecolour-error_consistency v1 [reference]: .251 (rank 149)
      - Geirhos2021highpass-error_consistency v1 [reference]: .050 (rank 161)
      - Geirhos2021lowpass-error_consistency v1 [reference]: .186 (rank 104)
      - Geirhos2021phasescrambling-error_consistency v1 [reference]: .049 (rank 221)
      - Geirhos2021powerequalisation-error_consistency v1 [reference]: .040 (rank 215)
      - Geirhos2021rotation-error_consistency v1 [reference]: .140 (rank 123)
      - Geirhos2021silhouette-error_consistency v1 [reference]: .437 (rank 133)
      - Geirhos2021sketch-error_consistency v1 [reference]: .105 (rank 122)
      - Geirhos2021stylized-error_consistency v1 [reference]: .278 (rank 121)
      - Geirhos2021uniformnoise-error_consistency v1 [reference]: .121 (rank 136)
    - Baker2022: .258 (rank 111, 3 benchmarks)
      - Baker2022fragmented-accuracy_delta v1 [reference]: .287 (rank 112)
      - Baker2022frankenstein-accuracy_delta v1 [reference]: .485 (rank 79)
      - Baker2022inverted-accuracy_delta v1 [reference]: .000 (rank 55)
    - BMD2024: .215 (rank 49, 4 benchmarks)
      - BMD2024.dotted_1Behavioral-accuracy_distance v1: .146 (rank 97)
      - BMD2024.dotted_2Behavioral-accuracy_distance v1: .216 (rank 33)
      - BMD2024.texture_1Behavioral-accuracy_distance v1: .279 (rank 35)
      - BMD2024.texture_2Behavioral-accuracy_distance v1: .220 (rank 61)
    - Ferguson2024 [reference]: .351 (rank 186, 14 benchmarks)
      - Ferguson2024circle_line-value_delta v1 [reference]: .389 (rank 87)
      - Ferguson2024color-value_delta v1 [reference]: .358 (rank 155)
      - Ferguson2024convergence-value_delta v1 [reference]: .237 (rank 149)
      - Ferguson2024eighth-value_delta v1 [reference]: .199 (rank 80)
      - Ferguson2024gray_easy-value_delta v1 [reference]: .037 (rank 204)
      - Ferguson2024gray_hard-value_delta v1 [reference]: .706 (rank 59)
      - Ferguson2024half-value_delta v1 [reference]: .383 (rank 136)
      - Ferguson2024llh-value_delta v1 [reference]: .888 (rank 42)
      - Ferguson2024quarter-value_delta v1 [reference]: .836 (rank 30)
      - Ferguson2024round_f-value_delta v1 [reference]: .075 (rank 191)
      - Ferguson2024round_v-value_delta v1 [reference]: .275 (rank 156)
      - Ferguson2024tilted_line-value_delta v1 [reference]: .539 (rank 125)
    - Hebart2023-match v1: .330 (rank 70)
    - Maniquet2024: .701 (rank 42, 2 benchmarks)
      - Maniquet2024-confusion_similarity v1 [reference]: .677 (rank 51)
      - Maniquet2024-tasks_consistency v1 [reference]: .724 (rank 17)
    - Coggan2024_behavior-ConditionWiseAccuracySimilarity v1: .326 (rank 92)
- engineering_vision: .169 (rank 237, 25 benchmarks)
  - ImageNet-C-top1 [reference]: .138 (rank 210, 4 benchmarks)
    - ImageNet-C-blur-top1 v2 [reference]: .214 (rank 161)
    - ImageNet-C-digital-top1 v2 [reference]: .339 (rank 159)
  - Geirhos2021-top1 [reference]: .507 (rank 173, 17 benchmarks)
    - Geirhos2021colour-top1 v1 [reference]: .936 (rank 167)
    - Geirhos2021contrast-top1 v1 [reference]: .667 (rank 152)
    - Geirhos2021cueconflict-top1 v1 [reference]: .199 (rank 154)
    - Geirhos2021edge-top1 v1 [reference]: .294 (rank 104)
    - Geirhos2021eidolonI-top1 v1 [reference]: .448 (rank 220)
    - Geirhos2021eidolonII-top1 v1 [reference]: .477 (rank 179)
    - Geirhos2021eidolonIII-top1 v1 [reference]: .452 (rank 204)
    - Geirhos2021falsecolour-top1 v1 [reference]: .936 (rank 122)
    - Geirhos2021highpass-top1 v1 [reference]: .367 (rank 141)
    - Geirhos2021lowpass-top1 v1 [reference]: .349 (rank 188)
    - Geirhos2021phasescrambling-top1 v1 [reference]: .562 (rank 170)
    - Geirhos2021powerequalisation-top1 v1 [reference]: .639 (rank 152)
    - Geirhos2021rotation-top1 v1 [reference]: .624 (rank 169)
    - Geirhos2021silhouette-top1 v1 [reference]: .394 (rank 202)
    - Geirhos2021sketch-top1 v1 [reference]: .583 (rank 156)
    - Geirhos2021stylized-top1 v1 [reference]: .374 (rank 150)
    - Geirhos2021uniformnoise-top1 v1 [reference]: .319 (rank 184)
  - Hermann2020 [reference]: .199 (rank 181, 2 benchmarks)
    - Hermann2020cueconflict-shape_bias v1 [reference]: .242 (rank 182)
    - Hermann2020cueconflict-shape_match v1 [reference]: .155 (rank 163)
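Each per-benchmark score above can in principle be recomputed locally. The snippet below is a minimal sketch, assuming the brainscore_vision package is installed and that the public variant of the benchmark identifier (here `MajajHong2015public.IT-pls`) is registered locally; identifiers and data availability may differ from the hosted leaderboard.

```python
# Minimal sketch: recompute one per-benchmark score for this model.
# Assumes brainscore_vision is installed and the public benchmark
# identifier below exists in the local benchmark registry.
from brainscore_vision import score

result = score(
    model_identifier="mobilenet_v2_1_0_224",
    benchmark_identifier="MajajHong2015public.IT-pls",
)
print(result)  # a Score; its center value corresponds to a table entry above
```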
How to use
```python
from brainscore_vision import load_model

model = load_model("mobilenet_v2_1_0_224")
model.start_task(...)
model.start_recording(...)
model.look_at(...)
```
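The placeholders above are kept from the original listing. As an illustration only, the sketch below fills them with example arguments following the BrainModel interface; the task type, recording target, time bin, and stimulus-set identifier are assumptions, not part of the original snippet.

```python
# Illustrative sketch: all arguments below are example values, not part of
# the original listing above.
from brainscore_vision import load_model, load_stimulus_set
from brainscore_vision.model_interface import BrainModel

model = load_model("mobilenet_v2_1_0_224")
stimuli = load_stimulus_set("Geirhos2021_colour")  # assumed example stimulus set

# Behavioral use: configure a probabilities task on fitting stimuli, then show stimuli.
model.start_task(BrainModel.Task.probabilities, stimuli)
probabilities = model.look_at(stimuli)

# Neural use: record from the layer committed to IT in a 70-170 ms time bin.
model.start_recording("IT", time_bins=[(70, 170)])
recordings = model.look_at(stimuli)
```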
Benchmark references (BibTeX)
```bibtex
@article{Freeman2013,
  author  = {Freeman, Jeremy and Ziemba, Corey M. and Heeger, David J. and Simoncelli, Eero P. and Movshon, J. Anthony},
  title   = {A functional and perceptual signature of the second visual area in primates},
  journal = {Nature Neuroscience},
  year    = {2013},
  volume  = {16},
  number  = {7},
  pages   = {974--981},
  doi     = {10.1038/nn.3402},
  url     = {https://doi.org/10.1038/nn.3402}
}

@article{Marques2021.03.01.433495,
  author    = {Marques, Tiago and Schrimpf, Martin and DiCarlo, James J.},
  title     = {Multi-scale hierarchical neural network models that bridge from single neurons in the primate primary visual cortex to object recognition behavior},
  journal   = {bioRxiv},
  year      = {2021},
  publisher = {Cold Spring Harbor Laboratory},
  doi       = {10.1101/2021.03.01.433495},
  url       = {https://www.biorxiv.org/content/early/2021/08/13/2021.03.01.433495}
}

@article{Cavanaugh2002,
  author  = {Cavanaugh, James R. and Bair, Wyeth and Movshon, J. A.},
  title   = {Nature and Interaction of Signals From the Receptive Field Center and Surround in Macaque V1 Neurons},
  journal = {Journal of Neurophysiology},
  year    = {2002},
  volume  = {88},
  number  = {5},
  pages   = {2530--2546},
  doi     = {10.1152/jn.00692.2001},
  url     = {http://www.physiology.org/doi/10.1152/jn.00692.2001}
}

@article{Schiller1976,
  author  = {Schiller, P. H. and Finlay, B. L. and Volman, S. F.},
  title   = {Quantitative studies of single-cell properties in monkey striate cortex. III. Spatial Frequency},
  journal = {Journal of Neurophysiology},
  year    = {1976},
  volume  = {39},
  number  = {6},
  pages   = {1334--1351},
  doi     = {10.1152/jn.1976.39.6.1352},
  url     = {http://www.ncbi.nlm.nih.gov/pubmed/825624}
}

@inproceedings{santurkar2019computer,
  author    = {Santurkar, Shibani and Tsipras, Dimitris and Tran, Brandon and Ilyas, Andrew and Engstrom, Logan and Madry, Aleksander},
  title     = {Computer Vision with a Single (Robust) Classifier},
  booktitle = {ArXiv preprint arXiv:1906.09453},
  year      = {2019}
}

@article{Majaj13402,
  author    = {Majaj, Najib J. and Hong, Ha and Solomon, Ethan A. and DiCarlo, James J.},
  title     = {Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance},
  journal   = {Journal of Neuroscience},
  year      = {2015},
  volume    = {35},
  number    = {39},
  pages     = {13402--13418},
  publisher = {Society for Neuroscience},
  doi       = {10.1523/JNEUROSCI.5181-14.2015},
  url       = {https://www.jneurosci.org/content/35/39/13402}
}

@misc{Sanghavi_DiCarlo_2021,
  author    = {Sanghavi, Sachi and DiCarlo, James J},
  title     = {Sanghavi2020},
  year      = {2021},
  month     = {Nov},
  publisher = {OSF},
  doi       = {10.17605/OSF.IO/CHWDK},
  url       = {osf.io/chwdk}
}

@misc{Sanghavi_Jozwik_DiCarlo_2021,
  author    = {Sanghavi, Sachi and Jozwik, Kamila M and DiCarlo, James J},
  title     = {SanghaviJozwik2020},
  year      = {2021},
  month     = {Nov},
  publisher = {OSF},
  doi       = {10.17605/OSF.IO/FHY36},
  url       = {osf.io/fhy36}
}

@misc{Sanghavi_Murty_DiCarlo_2021,
  author    = {Sanghavi, Sachi and Murty, N A R and DiCarlo, James J},
  title     = {SanghaviMurty2020},
  year      = {2021},
  month     = {Nov},
  publisher = {OSF},
  doi       = {10.17605/OSF.IO/FCHME},
  url       = {osf.io/fchme}
}

@article{Kar2019,
  author  = {Kar, Kohitij and Kubilius, Jonas and Schmidt, Kailyn and Issa, Elias B. and DiCarlo, James J.},
  title   = {Evidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior},
  journal = {Nature Neuroscience},
  year    = {2019},
  volume  = {22},
  number  = {6},
  pages   = {974--983},
  doi     = {10.1038/s41593-019-0392-5},
  url     = {https://doi.org/10.1038/s41593-019-0392-5}
}

@article{geirhos2021partial,
  author  = {Geirhos, Robert and Narayanappa, Kantharaju and Mitzkus, Benjamin and Thieringer, Tizian and Bethge, Matthias and Wichmann, Felix A and Brendel, Wieland},
  title   = {Partial success in closing the gap between human and machine vision},
  journal = {Advances in Neural Information Processing Systems},
  year    = {2021},
  volume  = {34},
  url     = {https://openreview.net/forum?id=QkljT4mrfs}
}

@article{BAKER2022104913,
  author  = {Baker, Nicholas and Elder, James H.},
  title   = {Deep learning models fail to capture the configural nature of human shape perception},
  journal = {iScience},
  year    = {2022},
  volume  = {25},
  number  = {9},
  pages   = {104913},
  doi     = {10.1016/j.isci.2022.104913},
  url     = {https://www.sciencedirect.com/science/article/pii/S2589004222011853}
}

@misc{ferguson_ngo_lee_dicarlo_schrimpf_2024,
  author    = {Ferguson, Michael E, Jr and Ngo, Jerry and Lee, Michael and DiCarlo, James and Schrimpf, Martin},
  title     = {How Well is Visual Search Asymmetry predicted by a Binary-Choice, Rapid, Accuracy-based Visual-search, Oddball-detection (BRAVO) task?},
  year      = {2024},
  month     = {Jun},
  publisher = {OSF},
  doi       = {10.17605/OSF.IO/5BA3N},
  url       = {osf.io/5ba3n}
}

@article{Maniquet2024.04.02.587669,
  author    = {Maniquet, Tim and de Beeck, Hans Op and Costantino, Andrea Ivan},
  title     = {Recurrent issues with deep neural network models of visual recognition},
  journal   = {bioRxiv},
  year      = {2024},
  publisher = {Cold Spring Harbor Laboratory},
  doi       = {10.1101/2024.04.02.587669},
  url       = {https://www.biorxiv.org/content/early/2024/04/10/2024.04.02.587669}
}

@article{Hendrycks2019-di,
  author  = {Hendrycks, Dan and Dietterich, Thomas},
  title   = {Benchmarking Neural Network Robustness to Common Corruptions and Perturbations},
  journal = {arXiv preprint arXiv:1903.12261},
  year    = {2019},
  month   = {mar},
  url     = {https://arxiv.org/abs/1903.12261}
}

@article{hermann2020origins,
  author  = {Hermann, Katherine and Chen, Ting and Kornblith, Simon},
  title   = {The origins and prevalence of texture bias in convolutional neural networks},
  journal = {Advances in Neural Information Processing Systems},
  year    = {2020},
  volume  = {33},
  pages   = {19000--19015},
  url     = {https://proceedings.neurips.cc/paper/2020/hash/db5f9f42a7157abe65bb145000b5871a-Abstract.html}
}
```