Scores on benchmarks

Model ranks shown below are with respect to all public models.
Score  Rank  Benchmark
 .08    434  average_vision (81 benchmarks)
 .01    482    neural_vision (38 benchmarks)
 .06    449      V1 (24 benchmarks)
 .18     31        Coggan2024_fMRI.V1-rdm v1
 .14    225    behavior_vision (43 benchmarks)
 .10    221      Geirhos2021-error_consistency [reference] (17 benchmarks)
 .18    189        Geirhos2021colour-error_consistency v1 [reference]
 .08    234        Geirhos2021contrast-error_consistency v1 [reference]
 .16    154        Geirhos2021cueconflict-error_consistency v1 [reference]
 .08    134        Geirhos2021edge-error_consistency v1 [reference]
 .08    283        Geirhos2021eidolonI-error_consistency v1 [reference]
 .17    227        Geirhos2021eidolonII-error_consistency v1 [reference]
 .12    245        Geirhos2021eidolonIII-error_consistency v1 [reference]
 .22    164        Geirhos2021falsecolour-error_consistency v1 [reference]
 .05    153        Geirhos2021highpass-error_consistency v1 [reference]
 .09    193        Geirhos2021lowpass-error_consistency v1 [reference]
 .05    223        Geirhos2021phasescrambling-error_consistency v1 [reference]
 .01    286        Geirhos2021powerequalisation-error_consistency v1 [reference]
 .06    206        Geirhos2021rotation-error_consistency v1 [reference]
 .05    298        Geirhos2021silhouette-error_consistency v1 [reference]
 .04    225        Geirhos2021sketch-error_consistency v1 [reference]
 .16    185        Geirhos2021stylized-error_consistency v1 [reference]
 .06    188        Geirhos2021uniformnoise-error_consistency v1 [reference]
 .09    157      Baker2022 (3 benchmarks)
 .27    118        Baker2022fragmented-accuracy_delta v1 [reference]
 .00    149        Baker2022frankenstein-accuracy_delta v1 [reference]
 .00     58        Baker2022inverted-accuracy_delta v1 [reference]
 .15    124      BMD2024 (4 benchmarks)
 .17     87        BMD2024.dotted_1Behavioral-accuracy_distance v1
 .13    113        BMD2024.dotted_2Behavioral-accuracy_distance v1
 .13    133        BMD2024.texture_1Behavioral-accuracy_distance v1
 .16    123        BMD2024.texture_2Behavioral-accuracy_distance v1
 .02    255      Ferguson2024 [reference] (14 benchmarks)
 .27    180        Ferguson2024gray_hard-value_delta v1 [reference]
 .23    150      Hebart2023-match v1
 .42    195      Maniquet2024 (2 benchmarks)
 .16    231        Maniquet2024-confusion_similarity v1 [reference]
 .67     79        Maniquet2024-tasks_consistency v1 [reference]
 .10    177      Coggan2024_behavior-ConditionWiseAccuracySimilarity v1
 .10    286    engineering_vision (25 benchmarks)
 .52    154      Geirhos2021-top1 [reference] (17 benchmarks)
 .95    145        Geirhos2021colour-top1 v1 [reference]
 .86     88        Geirhos2021contrast-top1 v1 [reference]
 .16    242        Geirhos2021cueconflict-top1 v1 [reference]
 .19    214        Geirhos2021edge-top1 v1 [reference]
 .50    133        Geirhos2021eidolonI-top1 v1 [reference]
 .51    141        Geirhos2021eidolonII-top1 v1 [reference]
 .50    155        Geirhos2021eidolonIII-top1 v1 [reference]
 .93    127        Geirhos2021falsecolour-top1 v1 [reference]
 .50     64        Geirhos2021highpass-top1 v1 [reference]
 .42    129        Geirhos2021lowpass-top1 v1 [reference]
 .54    188        Geirhos2021phasescrambling-top1 v1 [reference]
 .60    185        Geirhos2021powerequalisation-top1 v1 [reference]
 .67    128        Geirhos2021rotation-top1 v1 [reference]
 .32    244        Geirhos2021silhouette-top1 v1 [reference]
 .49    216        Geirhos2021sketch-top1 v1 [reference]
 .33    206        Geirhos2021stylized-top1 v1 [reference]
 .38    151        Geirhos2021uniformnoise-top1 v1 [reference]

How to use

from brainscore_vision import load_model

# Load this model wrapped as a Brain-Score BrainModel.
model = load_model("resnet50_eMMCR_Vanilla")
# Instruct the model to perform a behavioral task (e.g. object classification).
model.start_task(...)
# Commit the model to record from a cortical region (e.g. "V1").
model.start_recording(...)
# Present stimuli and retrieve the model's responses.
model.look_at(...)
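
To run one of the benchmarks listed above against this model, a minimal sketch looks like the following. It assumes that brainscore_vision exposes a top-level score(model_identifier=..., benchmark_identifier=...) helper as in recent releases, and that the benchmark identifier matches the name shown in the table above.

from brainscore_vision import score

# Score this model on a single benchmark; the identifier is assumed to match
# the benchmark name listed under "Scores on benchmarks".
result = score(model_identifier="resnet50_eMMCR_Vanilla",
               benchmark_identifier="Geirhos2021colour-error_consistency")
print(result)  # ceiling-normalized score, typically with raw values attached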

Benchmarks BibTeX

@article{santurkar2019computer,
    title={Computer Vision with a Single (Robust) Classifier},
    author={Shibani Santurkar and Dimitris Tsipras and Brandon Tran and Andrew Ilyas and Logan Engstrom and Aleksander Madry},
    journal={arXiv preprint arXiv:1906.09453},
    year={2019}
}
@article{geirhos2021partial,
    title={Partial success in closing the gap between human and machine vision},
    author={Geirhos, Robert and Narayanappa, Kantharaju and Mitzkus, Benjamin and Thieringer, Tizian and Bethge, Matthias and Wichmann, Felix A and Brendel, Wieland},
    journal={Advances in Neural Information Processing Systems},
    volume={34},
    year={2021},
    url={https://openreview.net/forum?id=QkljT4mrfs}
}
@article{BAKER2022104913,
    title={Deep learning models fail to capture the configural nature of human shape perception},
    author={Nicholas Baker and James H. Elder},
    journal={iScience},
    volume={25},
    number={9},
    pages={104913},
    year={2022},
    issn={2589-0042},
    doi={10.1016/j.isci.2022.104913},
    url={https://www.sciencedirect.com/science/article/pii/S2589004222011853},
    keywords={Biological sciences, Neuroscience, Sensory neuroscience},
    abstract={A hallmark of human object perception is sensitivity to the holistic configuration of the local shape features of an object. Deep convolutional neural networks (DCNNs) are currently the dominant models for object recognition processing in the visual cortex, but do they capture this configural sensitivity? To answer this question, we employed a dataset of animal silhouettes and created a variant of this dataset that disrupts the configuration of each object while preserving local features. While human performance was impacted by this manipulation, DCNN performance was not, indicating insensitivity to object configuration. Modifications to training and architecture to make networks more brain-like did not lead to configural processing, and none of the networks were able to accurately predict trial-by-trial human object judgements. We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition.}
}
@misc{ferguson_ngo_lee_dicarlo_schrimpf_2024,
    title={How Well is Visual Search Asymmetry predicted by a Binary-Choice, Rapid, Accuracy-based Visual-search, Oddball-detection (BRAVO) task?},
    author={Ferguson, Michael E, Jr and Ngo, Jerry and Lee, Michael and DiCarlo, James and Schrimpf, Martin},
    url={osf.io/5ba3n},
    doi={10.17605/OSF.IO/5BA3N},
    publisher={OSF},
    year={2024},
    month={Jun}
}
@article{Maniquet2024.04.02.587669,
    title={Recurrent issues with deep neural network models of visual recognition},
    author={Maniquet, Tim and de Beeck, Hans Op and Costantino, Andrea Ivan},
    journal={bioRxiv},
    elocation-id={2024.04.02.587669},
    year={2024},
    doi={10.1101/2024.04.02.587669},
    publisher={Cold Spring Harbor Laboratory},
    url={https://www.biorxiv.org/content/early/2024/04/10/2024.04.02.587669},
    eprint={https://www.biorxiv.org/content/early/2024/04/10/2024.04.02.587669.full.pdf}
}
        

Layer Commitment

Region    Layer
V1        layer3.1
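
This commitment means that when a benchmark asks the model to record from V1, activations are read out from layer3.1. A minimal sketch of that interaction follows; the time_bins value is illustrative and the stimuli variable stands for a hypothetical StimulusSet supplied by a benchmark.

from brainscore_vision import load_model

model = load_model("resnet50_eMMCR_Vanilla")
# Recording from "V1" yields activations from the committed layer, layer3.1.
model.start_recording("V1", time_bins=[(70, 170)])  # illustrative time bin in ms
# responses = model.look_at(stimuli)  # stimuli: a StimulusSet provided by the benchmark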

Visual Angle

Not specified.