Scores on benchmarks

Model rank shown below is with respect to all public models.
[Per-benchmark score-bar charts (0 to ceiling, with best/median markers across public models) and the sample-stimulus thumbnails are omitted here; the scores, ranks, and dataset sizes they conveyed are listed below. Indentation reflects the benchmark hierarchy.]

.210  average_vision  (rank 214; 81 benchmarks)
    .026  neural_vision  (rank 464; 38 benchmarks)
        .018  V1  (rank 469; 24 benchmarks)
            .053  Coggan2024_fMRI.V1-rdm v1  (rank 124; 24 images)
        .011  V2  (rank 472; 2 benchmarks)
            .022  Coggan2024_fMRI.V2-rdm v1  (rank 199; 24 images)
        .005  V4  (rank 465; 5 benchmarks)
            .027  Coggan2024_fMRI.V4-rdm v1  (rank 122; 24 images)
        .069  IT  (rank 421; 7 benchmarks)
            .481  Coggan2024_fMRI.IT-rdm v1  (rank 75; 24 images)
    .394  behavior_vision  (rank 70; 43 benchmarks)
        .491  Rajalingham2018-i2n v2 [reference]  (rank 188; match-to-sample task; 240 images)
        .462  Geirhos2021-error_consistency [reference]  (rank 40; 17 benchmarks)
            .769  Geirhos2021colour-error_consistency v1 [reference]  (rank 16; 640 images)
            .598  Geirhos2021contrast-error_consistency v1 [reference]  (rank 19; 800 images)
            .332  Geirhos2021cueconflict-error_consistency v1 [reference]  (rank 47; 1280 images)
            .115  Geirhos2021edge-error_consistency v1 [reference]  (rank 78; 160 images)
            .505  Geirhos2021eidolonI-error_consistency v1 [reference]  (rank 62; 800 images)
            .534  Geirhos2021eidolonII-error_consistency v1 [reference]  (rank 54; 640 images)
            .490  Geirhos2021eidolonIII-error_consistency v1 [reference]  (rank 41; 480 images)
            .780  Geirhos2021falsecolour-error_consistency v1 [reference]  (rank 3; 560 images)
            .149  Geirhos2021highpass-error_consistency v1 [reference]  (rank 52; 640 images)
            .396  Geirhos2021lowpass-error_consistency v1 [reference]  (rank 50; 800 images)
            .308  Geirhos2021phasescrambling-error_consistency v1 [reference]  (rank 55; 640 images)
            .363  Geirhos2021powerequalisation-error_consistency v1 [reference]  (rank 47; 560 images)
            .344  Geirhos2021rotation-error_consistency v1 [reference]  (rank 45; 960 images)
            .750  Geirhos2021silhouette-error_consistency v1 [reference]  (rank 47; 160 images)
            .221  Geirhos2021sketch-error_consistency v1 [reference]  (rank 51; 800 images)
            .646  Geirhos2021stylized-error_consistency v1 [reference]  (rank 29; 800 images)
            .555  Geirhos2021uniformnoise-error_consistency v1 [reference]  (rank 38; 800 images)
        .361  Baker2022  (rank 92; 3 benchmarks)
            .111  Baker2022fragmented-accuracy_delta v1 [reference]  (rank 137; 716 images)
            .972  Baker2022frankenstein-accuracy_delta v1 [reference]  (rank 3; 716 images)
            .000  Baker2022inverted-accuracy_delta v1 [reference]  (rank 58; 360 images)
        .179  BMD2024  (rank 92; 4 benchmarks)
            .146  BMD2024.dotted_1Behavioral-accuracy_distance v1  (rank 101; 100 images)
            .093  BMD2024.dotted_2Behavioral-accuracy_distance v1  (rank 173; 100 images)
            .269  BMD2024.texture_1Behavioral-accuracy_distance v1  (rank 39; 100 images)
            .210  BMD2024.texture_2Behavioral-accuracy_distance v1  (rank 71; 100 images)
        .553  Ferguson2024 [reference]  (rank 42; 14 benchmarks, each a 2_way_afc task on 48 images)
            1.0   Ferguson2024circle_line-value_delta v1 [reference]  (rank 1)
            .794  Ferguson2024color-value_delta v1 [reference]  (rank 100)
            .261  Ferguson2024convergence-value_delta v1 [reference]  (rank 163)
            .486  Ferguson2024eighth-value_delta v1 [reference]  (rank 43)
            .882  Ferguson2024gray_easy-value_delta v1 [reference]  (rank 11)
            .883  Ferguson2024gray_hard-value_delta v1 [reference]  (rank 43)
            .484  Ferguson2024half-value_delta v1 [reference]  (rank 133)
            .730  Ferguson2024juncture-value_delta v1 [reference]  (rank 25)
            .276  Ferguson2024lle-value_delta v1 [reference]  (rank 169)
            .164  Ferguson2024llh-value_delta v1 [reference]  (rank 206)
            .047  Ferguson2024quarter-value_delta v1 [reference]  (rank 241)
            .312  Ferguson2024round_f-value_delta v1 [reference]  (rank 137)
            1.0   Ferguson2024round_v-value_delta v1 [reference]  (rank 1)
            .417  Ferguson2024tilted_line-value_delta v1 [reference]  (rank 180)
        .113  Hebart2023-match v1  (rank 196; 1854 images)
        .613  Maniquet2024  (rank 87; 2 benchmarks)
            .513  Maniquet2024-confusion_similarity v1 [reference]  (rank 114; 13600 images)
            .713  Maniquet2024-tasks_consistency v1 [reference]  (rank 30; 13600 images)
        .383  Coggan2024_behavior-ConditionWiseAccuracySimilarity v1  (rank 75; 22560 images)
    .382  engineering_vision  (rank 106; 25 benchmarks)
        .792  ImageNet-top1 v1 [reference]  (rank 27; 50000 images)
        .288  ImageNet-C-top1 [reference]  (rank 146; 4 benchmarks)
            .532  ImageNet-C-noise-top1 v2 [reference]  (rank 33)
            .619  ImageNet-C-digital-top1 v2 [reference]  (rank 20)
        .618  Geirhos2021-top1 [reference]  (rank 62; 17 benchmarks)
            .991  Geirhos2021colour-top1 v1 [reference]  (rank 25; 640 images)
            .963  Geirhos2021contrast-top1 v1 [reference]  (rank 44; 800 images)
            .225  Geirhos2021cueconflict-top1 v1 [reference]  (rank 103; 1280 images)
            .250  Geirhos2021edge-top1 v1 [reference]  (rank 154; 160 images)
            .454  Geirhos2021eidolonI-top1 v1 [reference]  (rank 215; 800 images)
            .522  Geirhos2021eidolonII-top1 v1 [reference]  (rank 133; 640 images)
            .504  Geirhos2021eidolonIII-top1 v1 [reference]  (rank 156; 480 images)
            .989  Geirhos2021falsecolour-top1 v1 [reference]  (rank 16; 560 images)
            .522  Geirhos2021highpass-top1 v1 [reference]  (rank 63; 640 images)
            .536  Geirhos2021lowpass-top1 v1 [reference]  (rank 36; 800 images)
            .672  Geirhos2021phasescrambling-top1 v1 [reference]  (rank 57; 640 images)
            .841  Geirhos2021powerequalisation-top1 v1 [reference]  (rank 46; 560 images)
            .795  Geirhos2021rotation-top1 v1 [reference]  (rank 42; 960 images)
            .525  Geirhos2021silhouette-top1 v1 [reference]  (rank 99; 160 images)
            .673  Geirhos2021sketch-top1 v1 [reference]  (rank 60; 800 images)
            .438  Geirhos2021stylized-top1 v1 [reference]  (rank 73; 800 images)
            .616  Geirhos2021uniformnoise-top1 v1 [reference]  (rank 52; 800 images)
        .212  Hermann2020 [reference]  (rank 170; 2 benchmarks)
            .242  Hermann2020cueconflict-shape_bias v1 [reference]  (rank 190)
            .183  Hermann2020cueconflict-shape_match v1 [reference]  (rank 109)
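
Every identifier in the listing above is a registered plugin name, so any leaf benchmark can be re-run locally against this model. A minimal sketch, assuming a standard brainscore_vision installation; the chosen benchmark is just one example from the table:

from brainscore_vision import load_benchmark, load_model

# Reproduce a single leaf score from the listing above.
model = load_model("nasnet_large")
benchmark = load_benchmark("Geirhos2021colour-error_consistency")
result = benchmark(model)  # returns the ceiling-normalized score for this benchmark
print(result)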

How to use

from brainscore_vision import load_model

model = load_model("nasnet_large")
model.start_task(...)       # choose a behavioral task, e.g. BrainModel.Task.probabilities
model.start_recording(...)  # or choose a neural recording target, e.g. "IT" with time_bins
model.look_at(...)          # present a stimulus set; returns behavior or recordings
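
The calls above drive the model manually. To reproduce one of the scores listed on this page end-to-end, the package's top-level score helper runs the whole pipeline; a minimal sketch, assuming this model and benchmark are available in the local plugin registry:

from brainscore_vision import score

# Loads the model, runs the benchmark stimuli through it, compares the
# result against primate data, and ceiling-normalizes the outcome.
result = score(model_identifier="nasnet_large",
               benchmark_identifier="Rajalingham2018-i2n")
print(result)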

Benchmarks BibTeX

@article {Marques2021.03.01.433495,
	author = {Marques, Tiago and Schrimpf, Martin and DiCarlo, James J.},
	title = {Multi-scale hierarchical neural network models that bridge from single neurons in the primate primary visual cortex to object recognition behavior},
	elocation-id = {2021.03.01.433495},
	year = {2021},
	doi = {10.1101/2021.03.01.433495},
	publisher = {Cold Spring Harbor Laboratory},
	abstract = {Primate visual object recognition relies on the representations in cortical areas at the top of the ventral stream that are computed by a complex, hierarchical network of neural populations. While recent work has created reasonably accurate image-computable hierarchical neural network models of those neural stages, those models do not yet bridge between the properties of individual neurons and the overall emergent behavior of the ventral stream. One reason we cannot yet do this is that individual artificial neurons in multi-stage models have not been shown to be functionally similar to individual biological neurons. Here, we took an important first step by building and evaluating hundreds of hierarchical neural network models in how well their artificial single neurons approximate macaque primary visual cortical (V1) neurons. We found that single neurons in certain models are surprisingly similar to their biological counterparts and that the distributions of single neuron properties, such as those related to orientation and spatial frequency tuning, approximately match those in macaque V1. Critically, we observed that hierarchical models with V1 stages that better match macaque V1 at the single neuron level are also more aligned with human object recognition behavior. Finally, we show that an optimized classical neuroscientific model of V1 is more functionally similar to primate V1 than all of the tested multi-stage models, suggesting room for further model improvements with tangible payoffs in closer alignment to human behavior. These results provide the first multi-stage, multi-scale models that allow our field to ask precisely how the specific properties of individual V1 neurons relate to recognition behavior.},
	URL = {https://www.biorxiv.org/content/early/2021/08/13/2021.03.01.433495},
	eprint = {https://www.biorxiv.org/content/early/2021/08/13/2021.03.01.433495.full.pdf},
	journal = {bioRxiv}
}
        @article{Cavanaugh2002,
            author = {Cavanaugh, James R. and Bair, Wyeth and Movshon, J. A.},
            doi = {10.1152/jn.00692.2001},
            isbn = {0022-3077 (Print) 0022-3077 (Linking)},
            issn = {0022-3077},
            journal = {Journal of Neurophysiology},
            number = {5},
            pages = {2530--2546},
            pmid = {12424292},
            title = {{Nature and Interaction of Signals From the Receptive Field Center and Surround in Macaque V1 Neurons}},
            url = {http://www.physiology.org/doi/10.1152/jn.00692.2001},
            volume = {88},
            year = {2002}
        }
        @article{Freeman2013,
            author = {Freeman, Jeremy and Ziemba, Corey M. and Heeger, David J. and Simoncelli, E. P. and Movshon, J. A.},
            doi = {10.1038/nn.3402},
            issn = {10976256},
            journal = {Nature Neuroscience},
            number = {7},
            pages = {974--981},
            pmid = {23685719},
            publisher = {Nature Publishing Group},
            title = {{A functional and perceptual signature of the second visual area in primates}},
            url = {http://dx.doi.org/10.1038/nn.3402},
            volume = {16},
            year = {2013}
            }
        @article{Schiller1976,
            author = {Schiller, P. H. and Finlay, B. L. and Volman, S. F.},
            doi = {10.1152/jn.1976.39.6.1352},
            issn = {0022-3077},
            journal = {Journal of neurophysiology},
            number = {6},
            pages = {1334--1351},
            pmid = {825624},
            title = {{Quantitative studies of single-cell properties in monkey striate cortex. III. Spatial Frequency}},
            url = {http://www.ncbi.nlm.nih.gov/pubmed/825624},
            volume = {39},
            year = {1976}
            }
        @article{santurkar2019computer,
            title = {Computer Vision with a Single (Robust) Classifier},
            author = {Shibani Santurkar and Dimitris Tsipras and Brandon Tran and Andrew Ilyas and Logan Engstrom and Aleksander Madry},
            journal = {arXiv preprint arXiv:1906.09453},
            year = {2019}
        }
        @article{Kar2019,
            author = {Kar, Kohitij and Kubilius, Jonas and Schmidt, Kailyn and Issa, Elias B. and DiCarlo, James J.},
            title = {Evidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior},
            journal = {Nature Neuroscience},
            year = {2019},
            month = {Jun},
            day = {01},
            volume = {22},
            number = {6},
            pages = {974-983},
            abstract = {Non-recurrent deep convolutional neural networks (CNNs) are currently the best at modeling core object recognition, a behavior that is supported by the densely recurrent primate ventral stream, culminating in the inferior temporal (IT) cortex. If recurrence is critical to this behavior, then primates should outperform feedforward-only deep CNNs for images that require additional recurrent processing beyond the feedforward IT response. Here we first used behavioral methods to discover hundreds of these `challenge' images. Second, using large-scale electrophysiology, we observed that behaviorally sufficient object identity solutions emerged {\textasciitilde}30{\thinspace}ms later in the IT cortex for challenge images compared with primate performance-matched `control' images. Third, these behaviorally critical late-phase IT response patterns were poorly predicted by feedforward deep CNN activations. Notably, very-deep CNNs and shallower recurrent CNNs better predicted these late IT responses, suggesting that there is a functional equivalence between additional nonlinear transformations and recurrence. Beyond arguing that recurrent circuits are critical for rapid object identification, our results provide strong constraints for future recurrent model development.},
            issn = {1546-1726},
            doi = {10.1038/s41593-019-0392-5},
            url = {https://doi.org/10.1038/s41593-019-0392-5}
        }
        @article{Rajalingham240614,
            author = {Rajalingham, Rishi and Issa, Elias B. and Bashivan, Pouya and Kar, Kohitij and Schmidt, Kailyn and DiCarlo, James J.},
            title = {Large-scale, high-resolution comparison of the core visual object recognition behavior of humans, monkeys, and state-of-the-art deep artificial neural networks},
            elocation-id = {240614},
            year = {2018},
            doi = {10.1101/240614},
            publisher = {Cold Spring Harbor Laboratory},
            abstract = {Primates{\textemdash}including humans{\textemdash}can typically recognize objects in visual images at a glance even in the face of naturally occurring identity-preserving image transformations (e.g. changes in viewpoint). A primary neuroscience goal is to uncover neuron-level mechanistic models that quantitatively explain this behavior by predicting primate performance for each and every image. Here, we applied this stringent behavioral prediction test to the leading mechanistic models of primate vision (specifically, deep, convolutional, artificial neural networks; ANNs) by directly comparing their behavioral signatures against those of humans and rhesus macaque monkeys. Using high-throughput data collection systems for human and monkey psychophysics, we collected over one million behavioral trials for 2400 images over 276 binary object discrimination tasks. Consistent with previous work, we observed that state-of-the-art deep, feed-forward convolutional ANNs trained for visual categorization (termed DCNNIC models) accurately predicted primate patterns of object-level confusion. However, when we examined behavioral performance for individual images within each object discrimination task, we found that all tested DCNNIC models were significantly non-predictive of primate performance, and that this prediction failure was not accounted for by simple image attributes, nor rescued by simple model modifications. These results show that current DCNNIC models cannot account for the image-level behavioral patterns of primates, and that new ANN models are needed to more precisely capture the neural mechanisms underlying primate object vision. To this end, large-scale, high-resolution primate behavioral benchmarks{\textemdash}such as those obtained here{\textemdash}could serve as direct guides for discovering such models. SIGNIFICANCE STATEMENT: Recently, specific feed-forward deep convolutional artificial neural networks (ANNs) models have dramatically advanced our quantitative understanding of the neural mechanisms underlying primate core object recognition. In this work, we tested the limits of those ANNs by systematically comparing the behavioral responses of these models with the behavioral responses of humans and monkeys, at the resolution of individual images. Using these high-resolution metrics, we found that all tested ANN models significantly diverged from primate behavior. Going forward, these high-resolution, large-scale primate behavioral benchmarks could serve as direct guides for discovering better ANN models of the primate visual system.},
            URL = {https://www.biorxiv.org/content/early/2018/02/12/240614},
            eprint = {https://www.biorxiv.org/content/early/2018/02/12/240614.full.pdf},
            journal = {bioRxiv}
        }
        @article{geirhos2021partial,
              title={Partial success in closing the gap between human and machine vision},
              author={Geirhos, Robert and Narayanappa, Kantharaju and Mitzkus, Benjamin and Thieringer, Tizian and Bethge, Matthias and Wichmann, Felix A and Brendel, Wieland},
              journal={Advances in Neural Information Processing Systems},
              volume={34},
              year={2021},
              url={https://openreview.net/forum?id=QkljT4mrfs}
        }
        @article{BAKER2022104913,
                title = {Deep learning models fail to capture the configural nature of human shape perception},
                journal = {iScience},
                volume = {25},
                number = {9},
                pages = {104913},
                year = {2022},
                issn = {2589-0042},
                doi = {https://doi.org/10.1016/j.isci.2022.104913},
                url = {https://www.sciencedirect.com/science/article/pii/S2589004222011853},
                author = {Nicholas Baker and James H. Elder},
                keywords = {Biological sciences, Neuroscience, Sensory neuroscience},
                abstract = {A hallmark of human object perception is sensitivity to the holistic configuration of the local shape features of an object. Deep convolutional neural networks (DCNNs) are currently the dominant models for object recognition processing in the visual cortex, but do they capture this configural sensitivity? To answer this question, we employed a dataset of animal silhouettes and created a variant of this dataset that disrupts the configuration of each object while preserving local features. While human performance was impacted by this manipulation, DCNN performance was not, indicating insensitivity to object configuration. Modifications to training and architecture to make networks more brain-like did not lead to configural processing, and none of the networks were able to accurately predict trial-by-trial human object judgements. We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition.}
        }
        @misc{ferguson_ngo_lee_dicarlo_schrimpf_2024,
         title={How Well is Visual Search Asymmetry predicted by a Binary-Choice, Rapid, Accuracy-based Visual-search, Oddball-detection (BRAVO) task?},
         url={osf.io/5ba3n},
         DOI={10.17605/OSF.IO/5BA3N},
         publisher={OSF},
         author={Ferguson, Michael E, Jr and Ngo, Jerry and Lee, Michael and DiCarlo, James and Schrimpf, Martin},
         year={2024},
         month={Jun}
}
        @article {Maniquet2024.04.02.587669,
	author = {Maniquet, Tim and de Beeck, Hans Op and Costantino, Andrea Ivan},
	title = {Recurrent issues with deep neural network models of visual recognition},
	elocation-id = {2024.04.02.587669},
	year = {2024},
	doi = {10.1101/2024.04.02.587669},
	publisher = {Cold Spring Harbor Laboratory},
	URL = {https://www.biorxiv.org/content/early/2024/04/10/2024.04.02.587669},
	eprint = {https://www.biorxiv.org/content/early/2024/04/10/2024.04.02.587669.full.pdf},
	journal = {bioRxiv}
}
        @inproceedings{5206848,
            author = {J. {Deng} and W. {Dong} and R. {Socher} and L. {Li} and {Kai Li} and {Li Fei-Fei}},
            booktitle = {2009 IEEE Conference on Computer Vision and Pattern Recognition},
            title = {ImageNet: A large-scale hierarchical image database},
            year = {2009},
            pages = {248-255}
        }
        @ARTICLE{Hendrycks2019-di,
   title         = "Benchmarking Neural Network Robustness to Common Corruptions
                    and Perturbations",
   author        = "Hendrycks, Dan and Dietterich, Thomas",
   abstract      = "In this paper we establish rigorous benchmarks for image
                    classifier robustness. Our first benchmark, ImageNet-C,
                    standardizes and expands the corruption robustness topic,
                    while showing which classifiers are preferable in
                    safety-critical applications. Then we propose a new dataset
                    called ImageNet-P which enables researchers to benchmark a
                    classifier's robustness to common perturbations. Unlike
                    recent robustness research, this benchmark evaluates
                    performance on common corruptions and perturbations not
                    worst-case adversarial perturbations. We find that there are
                    negligible changes in relative corruption robustness from
                    AlexNet classifiers to ResNet classifiers. Afterward we
                    discover ways to enhance corruption and perturbation
                    robustness. We even find that a bypassed adversarial defense
                    provides substantial common perturbation robustness.
                    Together our benchmarks may aid future work toward networks
                    that robustly generalize.",
   month         =  mar,
   year          =  2019,
   archivePrefix = "arXiv",
   primaryClass  = "cs.LG",
   eprint        = "1903.12261",
   url           = "https://arxiv.org/abs/1903.12261"
}
        @article{hermann2020origins,
              title={The origins and prevalence of texture bias in convolutional neural networks},
              author={Hermann, Katherine and Chen, Ting and Kornblith, Simon},
              journal={Advances in Neural Information Processing Systems},
              volume={33},
              pages={19000--19015},
              year={2020},
              url={https://proceedings.neurips.cc/paper/2020/hash/db5f9f42a7157abe65bb145000b5871a-Abstract.html}
        }
        

Layer Commitment

Region  Layer
V1      cell_1
V2      cell_2
V4      cell_1
IT      cell_12
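
These commitments determine which activations layer is read out when a benchmark records from a given region: asking for IT responses, for example, returns activations of layer cell_12. A minimal sketch against the BrainModel interface; the module path brainscore_vision.model_interface and the 70-170 ms time bin are assumptions for illustration, not values from this model card:

from brainscore_vision import load_model
from brainscore_vision.model_interface import BrainModel

model = load_model("nasnet_large")
# Per the table above, IT is committed to layer cell_12, so recordings
# returned after this call are read from that layer.
model.start_recording(BrainModel.RecordingTarget.IT, time_bins=[(70, 170)])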

Visual Angle

None (not specified)