Scores on benchmarks

Model rank shown below is with respect to all public models.
average_vision: .304 (rank 95; 99 benchmarks)
  neural_vision: .355 (rank 20; 56 benchmarks)
    V1: .427 (rank 47; 28 benchmarks)
      Allen2022_fmri_surface.V1-rdm v1 [reference]: .202 (rank 46)
      Allen2022_fmri_surface.V1-ridge v1 [reference]: .431 (rank 68)
      FreemanZiemba2013.V1-pls v3 [reference]: .263 (rank 65; recordings from 102 sites in V1; 315 images)
      Hebart2023_fmri.V1-ridgecv v3: .566 (rank 70)
      Marques2020 [reference]: .700 (rank 237; 22 benchmarks)
        V1-orientation: .878 (rank 131; 7 benchmarks)
          Marques2020_DeValois1982-pref_or v1: .970 (rank 111; 1152 images)
          Marques2020_Ringach2002-circular_variance v1: .867 (rank 119; 1152 images)
          Marques2020_Ringach2002-cv_bandwidth_ratio v1: .681 (rank 344; 1152 images)
          Marques2020_Ringach2002-opr_cv_diff v1: .936 (rank 71; 1152 images)
          Marques2020_Ringach2002-or_bandwidth v1: .838 (rank 222; 1152 images)
          Marques2020_Ringach2002-or_selective v1: .999 (rank 20; 1152 images)
          Marques2020_Ringach2002-orth_pref_ratio v1: .856 (rank 120; 1152 images)
        V1-receptive_field_size: .490 (rank 296; 2 benchmarks)
          Marques2020_Cavanaugh2002-grating_summation_field v1 [reference]: .592 (rank 270; 2304 images)
          Marques2020_Cavanaugh2002-surround_diameter v1 [reference]: .389 (rank 272; 2304 images)
        V1-response_magnitude: .819 (rank 276; 3 benchmarks)
          Marques2020_FreemanZiemba2013-max_noise v1 [reference]: .733 (rank 282; 450 images)
          Marques2020_FreemanZiemba2013-max_texture v1 [reference]: .789 (rank 283; 450 images)
          Marques2020_Ringach2002-max_dc v1: .933 (rank 251; 1152 images)
        V1-response_selectivity: .671 (rank 171; 4 benchmarks)
          Marques2020_FreemanZiemba2013-texture_selectivity v1 [reference]: .841 (rank 54; 450 images)
          Marques2020_FreemanZiemba2013-texture_sparseness v1 [reference]: .780 (rank 111; 450 images)
          Marques2020_FreemanZiemba2013-texture_variance_ratio v1 [reference]: .756 (rank 141; 450 images)
          Marques2020_Ringach2002-modulation_ratio v1: .308 (rank 356; 1152 images)
        V1-spatial_frequency: .762 (rank 250; 3 benchmarks)
          Marques2020_DeValois1982-peak_sf v1: .509 (rank 363; 2112 images)
          Marques2020_Schiller1976-sf_bandwidth v1 [reference]: .938 (rank 45; 2112 images)
          Marques2020_Schiller1976-sf_selective v1 [reference]: .840 (rank 213; 2112 images)
        V1-surround_modulation: .581 (rank 261; 1 benchmark)
          Marques2020_Cavanaugh2002-surround_suppression_index v1 [reference]: .581 (rank 261; 2304 images)
        V1-texture_modulation: .701 (rank 106; 2 benchmarks)
          Marques2020_FreemanZiemba2013-abs_texture_modulation_index v1 [reference]: .652 (rank 98; 450 images)
          Marques2020_FreemanZiemba2013-texture_modulation_index v1 [reference]: .750 (rank 139; 450 images)
      Papale2025.V1-ridgecv v3 [reference]: .767 (rank 40)
      Coggan2024_fMRI.V1-rdm v1: .056 (rank 128; 24 images)
    V2: .262 (rank 97; 5 benchmarks)
      Allen2022_fmri_surface.V2-rdm v1 [reference]: .148 (rank 97)
      Allen2022_fmri_surface.V2-ridge v1 [reference]: .424 (rank 80)
      FreemanZiemba2013.V2-pls v3 [reference]: .192 (rank 420; recordings from 103 sites in V2; 315 images)
      Hebart2023_fmri.V2-ridgecv v3: .524 (rank 93)
      Coggan2024_fMRI.V2-rdm v1: .023 (rank 227; 24 images)
    V4: .348 (rank 10; 10 benchmarks)
      Allen2022_fmri_surface.V4-rdm v1 [reference]: .243 (rank 23)
      Allen2022_fmri_surface.V4-ridge v1 [reference]: .418 (rank 31)
      Hebart2023_fmri.V4-ridgecv v3: .344 (rank 15)
      MajajHong2015public.V4-reverse_pls v4 [reference]: .026 (rank 33)
      MajajHong2015.V4-pls v4 [reference]: .536 (rank 55; recordings from 88 sites in V4; 2560 images)
      Papale2025.V4-ridgecv v3 [reference]: .586 (rank 18)
      Sanghavi2020.V4-pls v2 [reference]: .584 (rank 18; recordings from 47 sites in V4; 5760 images)
      SanghaviJozwik2020.V4-pls v2 [reference]: .483 (rank 68; recordings from 50 sites in V4; 4916 images)
      SanghaviMurty2020.V4-pls v2 [reference]: .242 (rank 26; recordings from 46 sites in V4; 300 images)
      Coggan2024_fMRI.V4-rdm v1: .014 (rank 183; 24 images)
    IT: .384 (rank 3; 13 benchmarks)
      Allen2022_fmri_surface.IT-rdm v1 [reference]: .350 (rank 31)
      Allen2022_fmri_surface.IT-ridge v1 [reference]: .476 (rank 54)
      Bracci2019.anteriorVTC-rdm v1: .390 (rank 7; 27 images)
      Gifford2022.IT-ridgecv v3 [reference]: .252 (rank 18)
      Hebart2023_fmri.IT-ridgecv v3: .301 (rank 42)
      Kar2019-ost v2 [reference]: .276 (rank 6; recordings from 424 sites in IT; 1318 images)
      MajajHong2015.IT-pls v4 [reference]: .459 (rank 19; recordings from 168 sites in IT; 2560 images)
      MajajHong2015public.IT-reverse_pls v4 [reference]: .059 (rank 65)
      Papale2025.IT-ridgecv v3 [reference]: .562 (rank 30)
      Sanghavi2020.IT-pls v2 [reference]: .473 (rank 47; recordings from 88 sites in IT; 5760 images)
      SanghaviJozwik2020.IT-pls v2 [reference]: .445 (rank 99; recordings from 26 sites in IT; 4916 images)
      SanghaviMurty2020.IT-pls v2 [reference]: .365 (rank 36; recordings from 29 sites in IT; 300 images)
      Coggan2024_fMRI.IT-rdm v1: .586 (rank 53; 24 images)
  behavior_vision: .253 (rank 208; 43 benchmarks)
    Rajalingham2018-i2n v2 [reference]: .545 (rank 78; match-to-sample task; 240 images)
    Ferguson2024 [reference]: .289 (rank 274; 14 benchmarks)
      Ferguson2024circle_line-value_delta v1 [reference]: .068 (rank 254; 2_way_afc task; 48 images)
      Ferguson2024color-value_delta v1 [reference]: .437 (rank 188; 2_way_afc task; 48 images)
      Ferguson2024convergence-value_delta v1 [reference]: .627 (rank 68; 2_way_afc task; 48 images)
      Ferguson2024eighth-value_delta v1 [reference]: .027 (rank 244; 2_way_afc task; 48 images)
      Ferguson2024gray_easy-value_delta v1 [reference]: .098 (rank 219; 2_way_afc task; 48 images)
      Ferguson2024gray_hard-value_delta v1 [reference]: .641 (rank 98; 2_way_afc task; 48 images)
      Ferguson2024half-value_delta v1 [reference]: .155 (rank 247; 2_way_afc task; 48 images)
      Ferguson2024juncture-value_delta v1 [reference]: .191 (rank 116; 2_way_afc task; 48 images)
      Ferguson2024lle-value_delta v1 [reference]: .358 (rank 160; 2_way_afc task; 48 images)
      Ferguson2024llh-value_delta v1 [reference]: .062 (rank 265; 2_way_afc task; 48 images)
      Ferguson2024quarter-value_delta v1 [reference]: .171 (rank 194; 2_way_afc task; 48 images)
      Ferguson2024round_f-value_delta v1 [reference]: .208 (rank 191; 2_way_afc task; 48 images)
      Ferguson2024round_v-value_delta v1 [reference]: .609 (rank 110; 2_way_afc task; 48 images)
      Ferguson2024tilted_line-value_delta v1 [reference]: .401 (rank 207; 2_way_afc task; 48 images)
    Hebart2023-match v1: .349 (rank 62; 1854 images)
    Maniquet2024: .727 (rank 48; 2 benchmarks)
      Maniquet2024-confusion_similarity v1 [reference]: .798 (rank 49; 13600 images)
      Maniquet2024-tasks_consistency v1 [reference]: .656 (rank 130; 13600 images)
    Coggan2024_behavior-ConditionWiseAccuracySimilarity v1: .111 (rank 193; 22560 images)
  engineering_vision: .129 (rank 288; 25 benchmarks)
    ImageNet-top1 v1 [reference]: .644 (rank 191; 50000 images)

How to use

from brainscore_vision import load_model
model = load_model("ReAlnet03_cornet")   # load the model wrapped as a BrainModel
model.start_task(...)                    # configure a behavioral task (and fitting stimuli, if required)
model.start_recording(...)               # choose a recording target (e.g. 'IT') and time bins
model.look_at(...)                       # present a stimulus set and collect responses or behavior
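
For example, a minimal neural-recording run could look like the sketch below. This is not taken verbatim from the Brain-Score documentation: it assumes the standard BrainModel interface (a recording target such as 'IT' plus time bins for start_recording, and a registered StimulusSet for look_at), and the stimulus-set identifier is a placeholder you would replace with a real one.

from brainscore_vision import load_model, load_stimulus_set

model = load_model("ReAlnet03_cornet")
# record the model's IT-mapped responses in a 70-170 ms time bin (assumed interface)
model.start_recording('IT', time_bins=[(70, 170)])
# placeholder identifier: substitute any stimulus set registered with brainscore_vision
stimuli = load_stimulus_set('<your_stimulus_set_identifier>')
# look_at returns an assembly of responses (presentations x neuroids)
responses = model.look_at(stimuli)
print(responses.dims, responses.shape)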

Brain Encoding Response Generator (BERG)

With BERG you can generate neural responses to images of your choice using any Brain-Score vision model.

For more information on how to use BERG, see the documentation and tutorial.

Benchmarks bibtex

@article{allen_massive_2022,
    title = {A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence},
    volume = {25},
    issn = {1097-6256},
    doi = {10.1038/s41593-021-00962-x},
    journal = {Nature Neuroscience},
    author = {Allen, Emily J. and St-Yves, Ghislain and Wu, Yihan and Breedlove, Jesse L.
              and Prince, Jacob S. and Dowdle, Logan T. and Nau, Matthias and Caron, Brad
              and Pestilli, Franco and Charest, Ian and Hutchinson, J. Benjamin
              and Naselaris, Thomas and Kay, Kendrick},
    year = {2022},
    pages = {116--126},
}
        @article{Freeman2013,
            author={Freeman, Jeremy and Ziemba, Corey M. and Heeger, David J. and Simoncelli, Eero P. and Movshon, J. Anthony},
            title={A functional and perceptual signature of the second visual area in primates},
            journal={Nature Neuroscience},
            year={2013},
            month={Jul},
            day={01},
            volume={16},
            number={7},
            pages={974--981},
            issn={1546-1726},
            doi={10.1038/nn.3402},
            url={https://doi.org/10.1038/nn.3402}
            }
        @article {Marques2021.03.01.433495,
	author = {Marques, Tiago and Schrimpf, Martin and DiCarlo, James J.},
	title = {Multi-scale hierarchical neural network models that bridge from single neurons in the primate primary visual cortex to object recognition behavior},
	elocation-id = {2021.03.01.433495},
	year = {2021},
	doi = {10.1101/2021.03.01.433495},
	publisher = {Cold Spring Harbor Laboratory},
	URL = {https://www.biorxiv.org/content/early/2021/08/13/2021.03.01.433495},
	eprint = {https://www.biorxiv.org/content/early/2021/08/13/2021.03.01.433495.full.pdf},
	journal = {bioRxiv}
}
        @article{Cavanaugh2002,
            author = {Cavanaugh, James R. and Bair, Wyeth and Movshon, J. A.},
            doi = {10.1152/jn.00692.2001},
            isbn = {0022-3077 (Print) 0022-3077 (Linking)},
            issn = {0022-3077},
            journal = {Journal of Neurophysiology},
            number = {5},
            pages = {2530--2546},
            pmid = {12424292},
            title = {{Nature and Interaction of Signals From the Receptive Field Center and Surround in Macaque V1 Neurons}},
            url = {http://www.physiology.org/doi/10.1152/jn.00692.2001},
            volume = {88},
            year = {2002}
            }
        @article{Schiller1976,
            author = {Schiller, P. H. and Finlay, B. L. and Volman, S. F.},
            doi = {10.1152/jn.1976.39.6.1352},
            issn = {0022-3077},
            journal = {Journal of neurophysiology},
            number = {6},
            pages = {1334--1351},
            pmid = {825624},
            title = {{Quantitative studies of single-cell properties in monkey striate cortex. III. Spatial Frequency}},
            url = {http://www.ncbi.nlm.nih.gov/pubmed/825624},
            volume = {39},
            year = {1976}
            }
        @article{papale_extensive_2025,
	title = {An extensive dataset of spiking activity to reveal the syntax of the ventral stream},
	volume = {113},
	issn = {08966273},
	url = {https://linkinghub.elsevier.com/retrieve/pii/S089662732400881X},
	doi = {10.1016/j.neuron.2024.12.003},
	journal = {Neuron},
	author = {Papale, Paolo and Wang, Feng and Self, Matthew W. and Roelfsema, Pieter R.},
	year = {2025},
}
        @article{santurkar2019computer,
    title={Computer Vision with a Single (Robust) Classifier},
    author={Shibani Santurkar and Dimitris Tsipras and Brandon Tran and Andrew Ilyas and Logan Engstrom and Aleksander Madry},
    journal={arXiv preprint arXiv:1906.09453},
    year={2019}
}
        @article{muzellec_reverse_2026,
      title = {Reverse predictivity for bidirectional comparison of neural networks and biological brains},
      volume = {8},
      issn = {2522-5839},
      url = {https://doi.org/10.1038/s42256-026-01204-0},
      doi = {10.1038/s42256-026-01204-0},
      number = {3},
      journal = {Nature Machine Intelligence},
      author = {Muzellec, Sabine and Kar, Kohitij},
      month = mar,
      year = {2026},
      pages = {474--488},
}
        @article {Majaj13402,
            author = {Majaj, Najib J. and Hong, Ha and Solomon, Ethan A. and DiCarlo, James J.},
            title = {Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance},
            volume = {35},
            number = {39},
            pages = {13402--13418},
            year = {2015},
            doi = {10.1523/JNEUROSCI.5181-14.2015},
            publisher = {Society for Neuroscience},
            issn = {0270-6474},
            URL = {https://www.jneurosci.org/content/35/39/13402},
            eprint = {https://www.jneurosci.org/content/35/39/13402.full.pdf},
            journal = {Journal of Neuroscience}}
        @misc{Sanghavi_DiCarlo_2021,
  title={Sanghavi2020},
  url={osf.io/chwdk},
  DOI={10.17605/OSF.IO/CHWDK},
  publisher={OSF},
  author={Sanghavi, Sachi and DiCarlo, James J},
  year={2021},
  month={Nov}
}
        @misc{Sanghavi_Jozwik_DiCarlo_2021,
  title={SanghaviJozwik2020},
  url={osf.io/fhy36},
  DOI={10.17605/OSF.IO/FHY36},
  publisher={OSF},
  author={Sanghavi, Sachi and Jozwik, Kamila M and DiCarlo, James J},
  year={2021},
  month={Nov}
}
        @misc{Sanghavi_Murty_DiCarlo_2021,
  title={SanghaviMurty2020},
  url={osf.io/fchme},
  DOI={10.17605/OSF.IO/FCHME},
  publisher={OSF},
  author={Sanghavi, Sachi and Murty, N A R and DiCarlo, James J},
  year={2021},
  month={Nov}
}
        @article{gifford_large_2022,
	title = {A large and rich {EEG} dataset for modeling human visual object recognition},
	volume = {264},
	issn = {10538119},
	url = {https://linkinghub.elsevier.com/retrieve/pii/S1053811922008758},
	doi = {10.1016/j.neuroimage.2022.119754},
	journal = {NeuroImage},
	author = {Gifford, Alessandro T. and Dwivedi, Kshitij and Roig, Gemma and Cichy, Radoslaw M.},
	year = {2022},
}
        @article{Kar2019,
            author={Kar, Kohitij and Kubilius, Jonas and Schmidt, Kailyn and Issa, Elias B. and DiCarlo, James J.},
            title={Evidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior},
            journal={Nature Neuroscience},
            year={2019},
            month={Jun},
            day={01},
            volume={22},
            number={6},
            pages={974--983},
            issn={1546-1726},
            doi={10.1038/s41593-019-0392-5},
            url={https://doi.org/10.1038/s41593-019-0392-5}
            }
        @article {Rajalingham240614,
                author = {Rajalingham, Rishi and Issa, Elias B. and Bashivan, Pouya and Kar, Kohitij and Schmidt, Kailyn and DiCarlo, James J.},
                title = {Large-scale, high-resolution comparison of the core visual object recognition behavior of humans, monkeys, and state-of-the-art deep artificial neural networks},
                elocation-id = {240614},
                year = {2018},
                doi = {10.1101/240614},
                publisher = {Cold Spring Harbor Laboratory},
                URL = {https://www.biorxiv.org/content/early/2018/02/12/240614},
                eprint = {https://www.biorxiv.org/content/early/2018/02/12/240614.full.pdf},
                journal = {bioRxiv}
            }
        @article{geirhos2021partial,
              title={Partial success in closing the gap between human and machine vision},
              author={Geirhos, Robert and Narayanappa, Kantharaju and Mitzkus, Benjamin and Thieringer, Tizian and Bethge, Matthias and Wichmann, Felix A and Brendel, Wieland},
              journal={Advances in Neural Information Processing Systems},
              volume={34},
              year={2021},
              url={https://openreview.net/forum?id=QkljT4mrfs}
        }
        @article{BAKER2022104913,
                title = {Deep learning models fail to capture the configural nature of human shape perception},
                journal = {iScience},
                volume = {25},
                number = {9},
                pages = {104913},
                year = {2022},
                issn = {2589-0042},
                doi = {10.1016/j.isci.2022.104913},
                url = {https://www.sciencedirect.com/science/article/pii/S2589004222011853},
                author = {Nicholas Baker and James H. Elder},
                keywords = {Biological sciences, Neuroscience, Sensory neuroscience},
        }
        @misc{ferguson_ngo_lee_dicarlo_schrimpf_2024,
         title={How Well is Visual Search Asymmetry predicted by a Binary-Choice, Rapid, Accuracy-based Visual-search, Oddball-detection (BRAVO) task?},
         url={osf.io/5ba3n},
         DOI={10.17605/OSF.IO/5BA3N},
         publisher={OSF},
         author={Ferguson, Michael E, Jr and Ngo, Jerry and Lee, Michael and DiCarlo, James and Schrimpf, Martin},
         year={2024},
         month={Jun}
}
        @article {Maniquet2024.04.02.587669,
	author = {Maniquet, Tim and de Beeck, Hans Op and Costantino, Andrea Ivan},
	title = {Recurrent issues with deep neural network models of visual recognition},
	elocation-id = {2024.04.02.587669},
	year = {2024},
	doi = {10.1101/2024.04.02.587669},
	publisher = {Cold Spring Harbor Laboratory},
	URL = {https://www.biorxiv.org/content/early/2024/04/10/2024.04.02.587669},
	eprint = {https://www.biorxiv.org/content/early/2024/04/10/2024.04.02.587669.full.pdf},
	journal = {bioRxiv}
}
        @inproceedings{5206848,
            author={J. {Deng} and W. {Dong} and R. {Socher} and L. {Li} and {Kai Li} and {Li Fei-Fei}},
            booktitle={2009 IEEE Conference on Computer Vision and Pattern Recognition},
            title={ImageNet: A large-scale hierarchical image database},
            year={2009},
            pages={248--255}
        }
        @ARTICLE{Hendrycks2019-di,
   title         = "Benchmarking Neural Network Robustness to Common Corruptions
                    and Perturbations",
   author        = "Hendrycks, Dan and Dietterich, Thomas",
   abstract      = "In this paper we establish rigorous benchmarks for image
                    classifier robustness. Our first benchmark, ImageNet-C,
                    standardizes and expands the corruption robustness topic,
                    while showing which classifiers are preferable in
                    safety-critical applications. Then we propose a new dataset
                    called ImageNet-P which enables researchers to benchmark a
                    classifier's robustness to common perturbations. Unlike
                    recent robustness research, this benchmark evaluates
                    performance on common corruptions and perturbations not
                    worst-case adversarial perturbations. We find that there are
                    negligible changes in relative corruption robustness from
                    AlexNet classifiers to ResNet classifiers. Afterward we
                    discover ways to enhance corruption and perturbation
                    robustness. We even find that a bypassed adversarial defense
                    provides substantial common perturbation robustness.
                    Together our benchmarks may aid future work toward networks
                    that robustly generalize.",
   month         =  mar,
   year          =  2019,
   archivePrefix = "arXiv",
   primaryClass  = "cs.LG",
   eprint        = "1903.12261",
   url           = "https://arxiv.org/abs/1903.12261"
}
        @article{hermann2020origins,
              title={The origins and prevalence of texture bias in convolutional neural networks},
              author={Hermann, Katherine and Chen, Ting and Kornblith, Simon},
              journal={Advances in Neural Information Processing Systems},
              volume={33},
              pages={19000--19015},
              year={2020},
              url={https://proceedings.neurips.cc/paper/2020/hash/db5f9f42a7157abe65bb145000b5871a-Abstract.html}
        }
        

Layer Commitment

No layer commitments were found for this model. Older submissions may not have stored this information; it will be added when the model is evaluated on new benchmarks.

Visual Angle

Not specified.