resnet_101_v1
Scores on benchmarks
Model rank shown below is with respect to all public models.

Score | Benchmark (indentation indicates the benchmark hierarchy) | Rank | # benchmarks
.260 | average_vision | 171 | 81
.081 |   neural_vision | 439 | 38
.127 |     V1 | 356 | 24
.357 |       Marques2020 [reference] | 352 | 22
.498 |         V1-orientation | 347 | 7
.931 |           Marques2020_DeValois1982-pref_or v1 | 210 |
.797 |           Marques2020_Ringach2002-cv_bandwidth_ratio v1 | 225 |
.812 |           Marques2020_Ringach2002-opr_cv_diff v1 | 254 |
.949 |           Marques2020_Ringach2002-or_bandwidth v1 | 15 |
.514 |         V1-receptive_field_size | 244 | 2
.638 |           Marques2020_Cavanaugh2002-grating_summation_field v1 [reference] | 210 |
.390 |           Marques2020_Cavanaugh2002-surround_diameter v1 [reference] | 231 |
.336 |         V1-response_selectivity | 351 | 4
.594 |           Marques2020_FreemanZiemba2013-texture_sparseness v1 [reference] | 277 |
.749 |           Marques2020_FreemanZiemba2013-texture_variance_ratio v1 [reference] | 141 |
.530 |         V1-spatial_frequency | 345 | 3
.628 |           Marques2020_DeValois1982-peak_sf v1 | 243 |
.964 |           Marques2020_Schiller1976-sf_selective v1 [reference] | 73 |
.621 |         V1-surround_modulation | 181 | 1
.621 |           Marques2020_Cavanaugh2002-surround_suppression_index v1 [reference] | 181 |
.025 |       Coggan2024_fMRI.V1-rdm v1 | 147 |
.068 |     V2 | 435 | 2
.135 |       Coggan2024_fMRI.V2-rdm v1 | 60 |
.062 |     V4 | 438 | 5
.253 |       SanghaviMurty2020.V4-pls v1 [reference] (recordings from 46 V4 sites, 300 images) | 11 |
.057 |       Coggan2024_fMRI.V4-rdm v1 | 70 |
.066 |     IT | 417 | 7
.463 |       Coggan2024_fMRI.IT-rdm v1 | 71 |
.440 |   behavior_vision | 36 | 43
.521 |     Rajalingham2018-i2n v2 [reference] (match-to-sample task, 240 images) | 119 |
.318 |     Geirhos2021-error_consistency [reference] | 77 | 17
.551 |       Geirhos2021colour-error_consistency v1 [reference] | 57 |
.301 |       Geirhos2021contrast-error_consistency v1 [reference] | 81 |
.271 |       Geirhos2021cueconflict-error_consistency v1 [reference] | 71 |
.107 |       Geirhos2021edge-error_consistency v1 [reference] | 83 |
.546 |       Geirhos2021eidolonI-error_consistency v1 [reference] | 47 |
.470 |       Geirhos2021eidolonII-error_consistency v1 [reference] | 76 |
.465 |       Geirhos2021eidolonIII-error_consistency v1 [reference] | 55 |
.440 |       Geirhos2021falsecolour-error_consistency v1 [reference] | 81 |
.059 |       Geirhos2021highpass-error_consistency v1 [reference] | 135 |
.268 |       Geirhos2021lowpass-error_consistency v1 [reference] | 75 |
.183 |       Geirhos2021phasescrambling-error_consistency v1 [reference] | 84 |
.234 |       Geirhos2021powerequalisation-error_consistency v1 [reference] | 71 |
.195 |       Geirhos2021rotation-error_consistency v1 [reference] | 83 |
.549 |       Geirhos2021silhouette-error_consistency v1 [reference] | 87 |
.155 |       Geirhos2021sketch-error_consistency v1 [reference] | 72 |
.358 |       Geirhos2021stylized-error_consistency v1 [reference] | 81 |
.253 |       Geirhos2021uniformnoise-error_consistency v1 [reference] | 85 |
.651 |     Baker2022 | 25 | 3
.670 |       Baker2022fragmented-accuracy_delta v1 [reference] | 61 |
.374 |       Baker2022frankenstein-accuracy_delta v1 [reference] | 99 |
.908 |       Baker2022inverted-accuracy_delta v1 [reference] | 21 |
.241 |     BMD2024 | 36 | 4
.281 |       BMD2024.dotted_1Behavioral-accuracy_distance v1 | 28 |
.182 |       BMD2024.dotted_2Behavioral-accuracy_distance v1 | 60 |
.217 |       BMD2024.texture_1Behavioral-accuracy_distance v1 | 68 |
.283 |       BMD2024.texture_2Behavioral-accuracy_distance v1 | 33 |
.375 |     Ferguson2024 [reference] | 177 | 14
.291 |       Ferguson2024circle_line-value_delta v1 [reference] | 120 |
.352 |       Ferguson2024color-value_delta v1 [reference] | 171 |
.627 |       Ferguson2024convergence-value_delta v1 [reference] | 55 |
.027 |       Ferguson2024eighth-value_delta v1 [reference] | 187 |
.470 |       Ferguson2024gray_easy-value_delta v1 [reference] | 79 |
.437 |       Ferguson2024gray_hard-value_delta v1 [reference] | 110 |
.155 |       Ferguson2024half-value_delta v1 [reference] | 193 |
.020 |       Ferguson2024juncture-value_delta v1 [reference] | 187 |
.186 |       Ferguson2024lle-value_delta v1 [reference] | 175 |
.643 |       Ferguson2024llh-value_delta v1 [reference] | 85 |
.171 |       Ferguson2024quarter-value_delta v1 [reference] | 155 |
.961 |       Ferguson2024round_f-value_delta v1 [reference] | 11 |
.225 |       Ferguson2024round_v-value_delta v1 [reference] | 182 |
.682 |       Ferguson2024tilted_line-value_delta v1 [reference] | 71 |
.297 |     Hebart2023-match v1 | 98 |
.768 |     Maniquet2024 | 17 | 2
.831 |       Maniquet2024-confusion_similarity v1 [reference] | 27 |
.705 |       Maniquet2024-tasks_consistency v1 [reference] | 29 |
.351 |     Coggan2024_behavior-ConditionWiseAccuracySimilarity v1 | 84 |
.392 |   engineering_vision | 95 | 25
.736 |     ImageNet-top1 v1 [reference] | 77 |
.434 |     ImageNet-C-top1 [reference] | 61 | 4
.349 |       ImageNet-C-noise-top1 v2 [reference] | 91 |
.399 |       ImageNet-C-blur-top1 v2 [reference] | 45 |
.481 |       ImageNet-C-weather-top1 v2 [reference] | 62 |
.507 |       ImageNet-C-digital-top1 v2 [reference] | 66 |
.583 |     Geirhos2021-top1 [reference] | 88 | 17
.977 |       Geirhos2021colour-top1 v1 [reference] | 79 |
.839 |       Geirhos2021contrast-top1 v1 [reference] | 101 |
.199 |       Geirhos2021cueconflict-top1 v1 [reference] | 154 |
.237 |       Geirhos2021edge-top1 v1 [reference] | 174 |
.541 |       Geirhos2021eidolonI-top1 v1 [reference] | 53 |
.578 |       Geirhos2021eidolonII-top1 v1 [reference] | 40 |
.583 |       Geirhos2021eidolonIII-top1 v1 [reference] | 72 |
.975 |       Geirhos2021falsecolour-top1 v1 [reference] | 39 |
.375 |       Geirhos2021highpass-top1 v1 [reference] | 133 |
.474 |       Geirhos2021lowpass-top1 v1 [reference] | 70 |
.656 |       Geirhos2021phasescrambling-top1 v1 [reference] | 69 |
.766 |       Geirhos2021powerequalisation-top1 v1 [reference] | 94 |
.680 |       Geirhos2021rotation-top1 v1 [reference] | 121 |
.494 |       Geirhos2021silhouette-top1 v1 [reference] | 134 |
.631 |       Geirhos2021sketch-top1 v1 [reference] | 94 |
.417 |       Geirhos2021stylized-top1 v1 [reference] | 88 |
.487 |       Geirhos2021uniformnoise-top1 v1 [reference] | 100 |
.206 |     Hermann2020 [reference] | 171 | 2
.254 |       Hermann2020cueconflict-shape_bias v1 [reference] | 173 |
.157 |       Hermann2020cueconflict-shape_match v1 [reference] | 158 |

(On the original page each row also includes a score chart marking 0, the benchmark ceiling, and the best and median model scores, plus example stimulus thumbnails; those graphics are omitted here.)
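For readers who want to look at one of the listed benchmarks programmatically rather than through the site, the sketch below is one possible starting point. It assumes that load_benchmark is exposed at the top level of brainscore_vision, that the identifier matches the name shown in the table, and that benchmark objects expose identifier and ceiling attributes; any of these may differ in your installed version.

# Sketch only: load one of the benchmarks listed above and inspect it.
# Assumptions (not confirmed by this page): load_benchmark is a top-level
# entry point, "Rajalingham2018-i2n" is a registered identifier, and the
# returned benchmark exposes .identifier and .ceiling.
from brainscore_vision import load_benchmark

benchmark = load_benchmark("Rajalingham2018-i2n")
print(benchmark.identifier)  # registered benchmark identifier
print(benchmark.ceiling)     # ceiling estimate that raw scores are compared against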
How to use
from brainscore_vision import load_model

model = load_model("resnet_101_v1")
model.start_task(...)
model.start_recording(...)
model.look_at(...)
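Beyond stepping through the task/recording interface above, the model can also be run end-to-end against any benchmark in the table. The sketch below shows one way to do that, assuming the scoring entry point is exposed as brainscore_vision.score and that the benchmark identifier matches the table name in your installed version.

# Sketch only: score resnet_101_v1 on a single public benchmark.
# Assumes brainscore_vision exposes score(model_identifier, benchmark_identifier)
# and that "Geirhos2021colour-top1" is a registered benchmark identifier.
from brainscore_vision import score

result = score(model_identifier="resnet_101_v1",
               benchmark_identifier="Geirhos2021colour-top1")
print(result)  # a Score object; higher is better, as in the table above

Note that scoring typically downloads the benchmark's stimuli and assets on first use, so the first run can take noticeably longer than later ones.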
Benchmark references (BibTeX)
@Article{Freeman2013, author={Freeman, Jeremy and Ziemba, Corey M. and Heeger, David J. and Simoncelli, Eero P. and Movshon, J. Anthony}, title={A functional and perceptual signature of the second visual area in primates}, journal={Nature Neuroscience}, year={2013}, month={Jul}, day={01}, volume={16}, number={7}, pages={974-981}, abstract={The authors examined neuronal responses in V1 and V2 to synthetic texture stimuli that replicate higher-order statistical dependencies found in natural images. V2, but not V1, responded differentially to these textures, in both macaque (single neurons) and human (fMRI). Human detection of naturalistic structure in the same images was predicted by V2 responses, suggesting a role for V2 in representing natural image structure.}, issn={1546-1726}, doi={10.1038/nn.3402}, url={https://doi.org/10.1038/nn.3402} } @article {Marques2021.03.01.433495, author = {Marques, Tiago and Schrimpf, Martin and DiCarlo, James J.}, title = {Multi-scale hierarchical neural network models that bridge from single neurons in the primate primary visual cortex to object recognition behavior}, elocation-id = {2021.03.01.433495}, year = {2021}, doi = {10.1101/2021.03.01.433495}, publisher = {Cold Spring Harbor Laboratory}, abstract = {Primate visual object recognition relies on the representations in cortical areas at the top of the ventral stream that are computed by a complex, hierarchical network of neural populations. While recent work has created reasonably accurate image-computable hierarchical neural network models of those neural stages, those models do not yet bridge between the properties of individual neurons and the overall emergent behavior of the ventral stream. One reason we cannot yet do this is that individual artificial neurons in multi-stage models have not been shown to be functionally similar to individual biological neurons. Here, we took an important first step by building and evaluating hundreds of hierarchical neural network models in how well their artificial single neurons approximate macaque primary visual cortical (V1) neurons. We found that single neurons in certain models are surprisingly similar to their biological counterparts and that the distributions of single neuron properties, such as those related to orientation and spatial frequency tuning, approximately match those in macaque V1. Critically, we observed that hierarchical models with V1 stages that better match macaque V1 at the single neuron level are also more aligned with human object recognition behavior. Finally, we show that an optimized classical neuroscientific model of V1 is more functionally similar to primate V1 than all of the tested multi-stage models, suggesting room for further model improvements with tangible payoffs in closer alignment to human behavior. 
These results provide the first multi-stage, multi-scale models that allow our field to ask precisely how the specific properties of individual V1 neurons relate to recognition behavior.HighlightsImage-computable hierarchical neural network models can be naturally extended to create hierarchical {\textquotedblleft}brain models{\textquotedblright} that allow direct comparison with biological neural networks at multiple scales {\textendash} from single neurons, to population of neurons, to behavior.Single neurons in some of these hierarchical brain models are functionally similar to single neurons in macaque primate visual cortex (V1)Some hierarchical brain models have processing stages in which the entire distribution of artificial neuron properties closely matches the biological distributions of those same properties in macaque V1Hierarchical brain models whose V1 processing stages better match the macaque V1 stage also tend to be more aligned with human object recognition behavior at their output stageCompeting Interest StatementThe authors have declared no competing interest.}, URL = {https://www.biorxiv.org/content/early/2021/08/13/2021.03.01.433495}, eprint = {https://www.biorxiv.org/content/early/2021/08/13/2021.03.01.433495.full.pdf}, journal = {bioRxiv} } @article{Cavanaugh2002, author = {Cavanaugh, James R. and Bair, Wyeth and Movshon, J. A.}, doi = {10.1152/jn.00692.2001}, isbn = {0022-3077 (Print) 0022-3077 (Linking)}, issn = {0022-3077}, journal = {Journal of Neurophysiology}, mendeley-groups = {Benchmark effects/Done,Benchmark effects/*Surround Suppression}, number = {5}, pages = {2530--2546}, pmid = {12424292}, title = {{Nature and Interaction of Signals From the Receptive Field Center and Surround in Macaque V1 Neurons}}, url = {http://www.physiology.org/doi/10.1152/jn.00692.2001}, volume = {88}, year = {2002} } @article{Freeman2013, author = {Freeman, Jeremy and Ziemba, Corey M. and Heeger, David J. and Simoncelli, E. P. and Movshon, J. A.}, doi = {10.1038/nn.3402}, issn = {10976256}, journal = {Nature Neuroscience}, number = {7}, pages = {974--981}, pmid = {23685719}, publisher = {Nature Publishing Group}, title = {{A functional and perceptual signature of the second visual area in primates}}, url = {http://dx.doi.org/10.1038/nn.3402}, volume = {16}, year = {2013} } @article{Schiller1976, author = {Schiller, P. H. and Finlay, B. L. and Volman, S. F.}, doi = {10.1152/jn.1976.39.6.1352}, issn = {0022-3077}, journal = {Journal of neurophysiology}, number = {6}, pages = {1334--1351}, pmid = {825624}, title = {{Quantitative studies of single-cell properties in monkey striate cortex. III. Spatial Frequency}}, url = {http://www.ncbi.nlm.nih.gov/pubmed/825624}, volume = {39}, year = {1976} } @inproceedings{santurkar2019computer, title={Computer Vision with a Single (Robust) Classifier}, author={Shibani Santurkar and Dimitris Tsipras and Brandon Tran and Andrew Ilyas and Logan Engstrom and Aleksander Madry}, booktitle={ArXiv preprint arXiv:1906.09453}, year={2019} } @misc{Sanghavi_Murty_DiCarlo_2021, title={SanghaviMurty2020}, url={osf.io/fchme}, DOI={10.17605/OSF.IO/FCHME}, publisher={OSF}, author={Sanghavi, Sachi and Murty, N A R and DiCarlo, James J}, year={2021}, month={Nov} } @article {Rajalingham240614, author = {Rajalingham, Rishi and Issa, Elias B. 
and Bashivan, Pouya and Kar, Kohitij and Schmidt, Kailyn and DiCarlo, James J.}, title = {Large-scale, high-resolution comparison of the core visual object recognition behavior of humans, monkeys, and state-of-the-art deep artificial neural networks}, elocation-id = {240614}, year = {2018}, doi = {10.1101/240614}, publisher = {Cold Spring Harbor Laboratory}, abstract = {Primates{ extemdash}including humans{ extemdash}can typically recognize objects in visual images at a glance even in the face of naturally occurring identity-preserving image transformations (e.g. changes in viewpoint). A primary neuroscience goal is to uncover neuron-level mechanistic models that quantitatively explain this behavior by predicting primate performance for each and every image. Here, we applied this stringent behavioral prediction test to the leading mechanistic models of primate vision (specifically, deep, convolutional, artificial neural networks; ANNs) by directly comparing their behavioral signatures against those of humans and rhesus macaque monkeys. Using high-throughput data collection systems for human and monkey psychophysics, we collected over one million behavioral trials for 2400 images over 276 binary object discrimination tasks. Consistent with previous work, we observed that state-of-the-art deep, feed-forward convolutional ANNs trained for visual categorization (termed DCNNIC models) accurately predicted primate patterns of object-level confusion. However, when we examined behavioral performance for individual images within each object discrimination task, we found that all tested DCNNIC models were significantly non-predictive of primate performance, and that this prediction failure was not accounted for by simple image attributes, nor rescued by simple model modifications. These results show that current DCNNIC models cannot account for the image-level behavioral patterns of primates, and that new ANN models are needed to more precisely capture the neural mechanisms underlying primate object vision. To this end, large-scale, high-resolution primate behavioral benchmarks{ extemdash}such as those obtained here{ extemdash}could serve as direct guides for discovering such models.SIGNIFICANCE STATEMENT Recently, specific feed-forward deep convolutional artificial neural networks (ANNs) models have dramatically advanced our quantitative understanding of the neural mechanisms underlying primate core object recognition. In this work, we tested the limits of those ANNs by systematically comparing the behavioral responses of these models with the behavioral responses of humans and monkeys, at the resolution of individual images. Using these high-resolution metrics, we found that all tested ANN models significantly diverged from primate behavior. 
Going forward, these high-resolution, large-scale primate behavioral benchmarks could serve as direct guides for discovering better ANN models of the primate visual system.}, URL = {https://www.biorxiv.org/content/early/2018/02/12/240614}, eprint = {https://www.biorxiv.org/content/early/2018/02/12/240614.full.pdf}, journal = {bioRxiv} } @article{geirhos2021partial, title={Partial success in closing the gap between human and machine vision}, author={Geirhos, Robert and Narayanappa, Kantharaju and Mitzkus, Benjamin and Thieringer, Tizian and Bethge, Matthias and Wichmann, Felix A and Brendel, Wieland}, journal={Advances in Neural Information Processing Systems}, volume={34}, year={2021}, url={https://openreview.net/forum?id=QkljT4mrfs} } @article{BAKER2022104913, title = {Deep learning models fail to capture the configural nature of human shape perception}, journal = {iScience}, volume = {25}, number = {9}, pages = {104913}, year = {2022}, issn = {2589-0042}, doi = {https://doi.org/10.1016/j.isci.2022.104913}, url = {https://www.sciencedirect.com/science/article/pii/S2589004222011853}, author = {Nicholas Baker and James H. Elder}, keywords = {Biological sciences, Neuroscience, Sensory neuroscience}, abstract = {Summary A hallmark of human object perception is sensitivity to the holistic configuration of the local shape features of an object. Deep convolutional neural networks (DCNNs) are currently the dominant models for object recognition processing in the visual cortex, but do they capture this configural sensitivity? To answer this question, we employed a dataset of animal silhouettes and created a variant of this dataset that disrupts the configuration of each object while preserving local features. While human performance was impacted by this manipulation, DCNN performance was not, indicating insensitivity to object configuration. Modifications to training and architecture to make networks more brain-like did not lead to configural processing, and none of the networks were able to accurately predict trial-by-trial human object judgements. We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition.} } @misc{ferguson_ngo_lee_dicarlo_schrimpf_2024, title={How Well is Visual Search Asymmetry predicted by a Binary-Choice, Rapid, Accuracy-based Visual-search, Oddball-detection (BRAVO) task?}, url={osf.io/5ba3n}, DOI={10.17605/OSF.IO/5BA3N}, publisher={OSF}, author={Ferguson, Michael E, Jr and Ngo, Jerry and Lee, Michael and DiCarlo, James and Schrimpf, Martin}, year={2024}, month={Jun} } @article {Maniquet2024.04.02.587669, author = {Maniquet, Tim and de Beeck, Hans Op and Costantino, Andrea Ivan}, title = {Recurrent issues with deep neural network models of visual recognition}, elocation-id = {2024.04.02.587669}, year = {2024}, doi = {10.1101/2024.04.02.587669}, publisher = {Cold Spring Harbor Laboratory}, URL = {https://www.biorxiv.org/content/early/2024/04/10/2024.04.02.587669}, eprint = {https://www.biorxiv.org/content/early/2024/04/10/2024.04.02.587669.full.pdf}, journal = {bioRxiv} } @INPROCEEDINGS{5206848, author={J. {Deng} and W. {Dong} and R. {Socher} and L. 
{Li} and {Kai Li} and {Li Fei-Fei}}, booktitle={2009 IEEE Conference on Computer Vision and Pattern Recognition}, title={ImageNet: A large-scale hierarchical image database}, year={2009}, volume={}, number={}, pages={248-255}, } @ARTICLE{Hendrycks2019-di, title = "Benchmarking Neural Network Robustness to Common Corruptions and Perturbations", author = "Hendrycks, Dan and Dietterich, Thomas", abstract = "In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations not worst-case adversarial perturbations. We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. Together our benchmarks may aid future work toward networks that robustly generalize.", month = mar, year = 2019, archivePrefix = "arXiv", primaryClass = "cs.LG", eprint = "1903.12261", url = "https://arxiv.org/abs/1903.12261" } @article{hermann2020origins, title={The origins and prevalence of texture bias in convolutional neural networks}, author={Hermann, Katherine and Chen, Ting and Kornblith, Simon}, journal={Advances in Neural Information Processing Systems}, volume={33}, pages={19000--19015}, year={2020}, url={https://proceedings.neurips.cc/paper/2020/hash/db5f9f42a7157abe65bb145000b5871a-Abstract.html} }