Scores on benchmarks

Model ranks shown below are with respect to all public models.
.072  average_vision (rank 415, 99 benchmarks)
  .006  neural_vision (rank 499, 56 benchmarks)
    .025  V1 (rank 483, 28 benchmarks)
      .178  Coggan2024_fMRI.V1-rdm v1 (rank 38; 24 images)
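The Coggan2024_fMRI.V1-rdm benchmark compares representational dissimilarity matrices (RDMs) between model and brain responses. A minimal sketch under common assumptions — 1 − Pearson correlation as the pairwise dissimilarity, and Pearson correlation between the two RDMs' upper triangles as the score; the benchmark's exact choices (distance metric, ceiling normalization) may differ:

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation of two equal-length sequences (assumes nonzero variance)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def rdm(responses):
    """Upper triangle of the RDM: 1 - Pearson r between the response
    vectors (one per image) for every pair of images."""
    return [1 - pearson(responses[i], responses[j])
            for i, j in combinations(range(len(responses)), 2)]

def rdm_similarity(model_responses, brain_responses):
    """Correlate the model RDM with the brain RDM (second-order similarity)."""
    return pearson(rdm(model_responses), rdm(brain_responses))
```

For 24 images (as in this benchmark), each RDM's upper triangle has 24·23/2 = 276 entries; identical response sets yield a similarity of 1 by construction.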
  .138  behavior_vision (rank 264, 43 benchmarks)
    .098  Geirhos2021-error_consistency [reference] (rank 239, 17 benchmarks)
      .179  Geirhos2021colour-error_consistency v1 [reference] (rank 212; 640 images)
      .079  Geirhos2021contrast-error_consistency v1 [reference] (rank 246; 800 images)
      .160  Geirhos2021cueconflict-error_consistency v1 [reference] (rank 183; 1280 images)
      .084  Geirhos2021edge-error_consistency v1 [reference] (rank 150; 160 images)
      .077  Geirhos2021eidolonI-error_consistency v1 [reference] (rank 296; 800 images)
      .173  Geirhos2021eidolonII-error_consistency v1 [reference] (rank 236; 640 images)
      .123  Geirhos2021eidolonIII-error_consistency v1 [reference] (rank 257; 480 images)
      .218  Geirhos2021falsecolour-error_consistency v1 [reference] (rank 191; 560 images)
      .048  Geirhos2021highpass-error_consistency v1 [reference] (rank 197; 640 images)
      .090  Geirhos2021lowpass-error_consistency v1 [reference] (rank 221; 800 images)
      .051  Geirhos2021phasescrambling-error_consistency v1 [reference] (rank 237; 640 images)
      .013  Geirhos2021powerequalisation-error_consistency v1 [reference] (rank 302; 560 images)
      .063  Geirhos2021rotation-error_consistency v1 [reference] (rank 229; 960 images)
      .052  Geirhos2021silhouette-error_consistency v1 [reference] (rank 309; 160 images)
      .044  Geirhos2021sketch-error_consistency v1 [reference] (rank 246; 800 images)
      .159  Geirhos2021stylized-error_consistency v1 [reference] (rank 206; 800 images)
      .061  Geirhos2021uniformnoise-error_consistency v1 [reference] (rank 215; 800 images)
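The error_consistency benchmarks follow Geirhos et al.'s error-consistency metric: Cohen's kappa between the trial-wise correctness of model and human observers, i.e. observed agreement relative to the agreement expected from the two accuracies alone. A minimal sketch assuming binary correctness vectors for one model–human pair (the actual benchmark additionally aggregates over human subjects and applies ceiling normalization):

```python
def error_consistency(model_correct, human_correct):
    """Cohen's kappa between two binary trial-wise correctness vectors.

    c_obs: fraction of trials where model and human agree (both right or both wrong).
    c_exp: agreement expected by chance given the two accuracies p1, p2.
    kappa = (c_obs - c_exp) / (1 - c_exp); assumes c_exp < 1.
    """
    n = len(model_correct)
    c_obs = sum(m == h for m, h in zip(model_correct, human_correct)) / n
    p1 = sum(model_correct) / n
    p2 = sum(human_correct) / n
    c_exp = p1 * p2 + (1 - p1) * (1 - p2)
    return (c_obs - c_exp) / (1 - c_exp)
```

Two observers that make exactly the same errors score 1; agreement no better than chance scores 0, which is why error-consistency scores are typically much lower than top-1 accuracies on the same stimuli.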
    .091  Baker2022 (rank 182, 3 benchmarks)
      .274  Baker2022fragmented-accuracy_delta v1 [reference] (rank 133; 716 images)
      .000  Baker2022frankenstein-accuracy_delta v1 [reference] (rank 172; 716 images)
      .000  Baker2022inverted-accuracy_delta v1 [reference] (rank 62; 360 images)
    .146  BMD2024 (rank 145, 4 benchmarks)
      .166  BMD2024.dotted_1Behavioral-accuracy_distance v1 (rank 94; 100 images)
      .126  BMD2024.dotted_2Behavioral-accuracy_distance v1 (rank 127; 100 images)
      .134  BMD2024.texture_1Behavioral-accuracy_distance v1 (rank 151; 100 images)
      .157  BMD2024.texture_2Behavioral-accuracy_distance v1 (rank 137; 100 images)
    .019  Ferguson2024 [reference] (rank 283, 14 benchmarks)
      .270  Ferguson2024gray_hard-value_delta v1 [reference] (rank 197; 2_way_afc task, 48 images)
    .233  Hebart2023-match v1 (rank 171; 1854 images)
    .417  Maniquet2024 (rank 224, 2 benchmarks)
      .164  Maniquet2024-confusion_similarity v1 [reference] (rank 257; 13600 images)
      .671  Maniquet2024-tasks_consistency v1 [reference] (rank 102; 13600 images)
    .095  Coggan2024_behavior-ConditionWiseAccuracySimilarity v1 (rank 200; 22560 images)
  .104  engineering_vision (rank 303, 25 benchmarks)
    .520  Geirhos2021-top1 [reference] (rank 179, 17 benchmarks)
      .948  Geirhos2021colour-top1 v1 [reference] (rank 174; 640 images)
      .855  Geirhos2021contrast-top1 v1 [reference] (rank 117; 800 images)
      .158  Geirhos2021cueconflict-top1 v1 [reference] (rank 260; 1280 images)
      .194  Geirhos2021edge-top1 v1 [reference] (rank 227; 160 images)
      .498  Geirhos2021eidolonI-top1 v1 [reference] (rank 157; 800 images)
      .511  Geirhos2021eidolonII-top1 v1 [reference] (rank 167; 640 images)
      .498  Geirhos2021eidolonIII-top1 v1 [reference] (rank 187; 480 images)
      .929  Geirhos2021falsecolour-top1 v1 [reference] (rank 156; 560 images)
      .500  Geirhos2021highpass-top1 v1 [reference] (rank 80; 640 images)
      .419  Geirhos2021lowpass-top1 v1 [reference] (rank 158; 800 images)
      .539  Geirhos2021phasescrambling-top1 v1 [reference] (rank 215; 640 images)
      .598  Geirhos2021powerequalisation-top1 v1 [reference] (rank 206; 560 images)
      .674  Geirhos2021rotation-top1 v1 [reference] (rank 148; 960 images)
      .319  Geirhos2021silhouette-top1 v1 [reference] (rank 259; 160 images)
      .488  Geirhos2021sketch-top1 v1 [reference] (rank 239; 800 images)
      .329  Geirhos2021stylized-top1 v1 [reference] (rank 233; 800 images)
      .381  Geirhos2021uniformnoise-top1 v1 [reference] (rank 177; 800 images)

How to use

from brainscore_vision import load_model

model = load_model("resnet50_eMMCR_Vanilla")
model.start_task(...)       # specify the behavioral task the model should perform
model.start_recording(...)  # specify the brain region and time bins to record from
model.look_at(...)          # present stimuli and collect the model's responses

Brain Encoding Response Generator (BERG)

With BERG, you can generate neural responses to images of your choice using any Brain-Score vision model.

For more information on how to use BERG, see the documentation and tutorial.

Benchmarks bibtex

@article{santurkar2019computer,
    title={Computer Vision with a Single (Robust) Classifier},
    author={Shibani Santurkar and Dimitris Tsipras and Brandon Tran and Andrew Ilyas and Logan Engstrom and Aleksander Madry},
    journal={arXiv preprint arXiv:1906.09453},
    year={2019}
}
@article{geirhos2021partial,
    title={Partial success in closing the gap between human and machine vision},
    author={Geirhos, Robert and Narayanappa, Kantharaju and Mitzkus, Benjamin and Thieringer, Tizian and Bethge, Matthias and Wichmann, Felix A and Brendel, Wieland},
    journal={Advances in Neural Information Processing Systems},
    volume={34},
    year={2021},
    url={https://openreview.net/forum?id=QkljT4mrfs}
}
@article{BAKER2022104913,
    title={Deep learning models fail to capture the configural nature of human shape perception},
    author={Nicholas Baker and James H. Elder},
    journal={iScience},
    volume={25},
    number={9},
    pages={104913},
    year={2022},
    issn={2589-0042},
    doi={10.1016/j.isci.2022.104913},
    url={https://www.sciencedirect.com/science/article/pii/S2589004222011853},
    keywords={Biological sciences, Neuroscience, Sensory neuroscience},
    abstract={A hallmark of human object perception is sensitivity to the holistic configuration of the local shape features of an object. Deep convolutional neural networks (DCNNs) are currently the dominant models for object recognition processing in the visual cortex, but do they capture this configural sensitivity? To answer this question, we employed a dataset of animal silhouettes and created a variant of this dataset that disrupts the configuration of each object while preserving local features. While human performance was impacted by this manipulation, DCNN performance was not, indicating insensitivity to object configuration. Modifications to training and architecture to make networks more brain-like did not lead to configural processing, and none of the networks were able to accurately predict trial-by-trial human object judgements. We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition.}
}
@misc{ferguson_ngo_lee_dicarlo_schrimpf_2024,
    title={How Well is Visual Search Asymmetry predicted by a Binary-Choice, Rapid, Accuracy-based Visual-search, Oddball-detection (BRAVO) task?},
    author={Ferguson, Michael E, Jr and Ngo, Jerry and Lee, Michael and DiCarlo, James and Schrimpf, Martin},
    url={osf.io/5ba3n},
    doi={10.17605/OSF.IO/5BA3N},
    publisher={OSF},
    year={2024},
    month={Jun}
}
@article{Maniquet2024.04.02.587669,
    title={Recurrent issues with deep neural network models of visual recognition},
    author={Maniquet, Tim and de Beeck, Hans Op and Costantino, Andrea Ivan},
    elocation-id={2024.04.02.587669},
    year={2024},
    doi={10.1101/2024.04.02.587669},
    publisher={Cold Spring Harbor Laboratory},
    url={https://www.biorxiv.org/content/early/2024/04/10/2024.04.02.587669},
    eprint={https://www.biorxiv.org/content/early/2024/04/10/2024.04.02.587669.full.pdf},
    journal={bioRxiv}
}
        

Layer Commitment

Region  Layer
V1      layer3.1

Visual Angle

Not specified.