Scores on benchmarks

Model rank shown below is with respect to all public models. Each entry gives the benchmark, the model's score on it, and the model's rank; composite entries also note how many sub-benchmarks they aggregate.
average_vision: .29 (rank 134; 81 benchmarks)

neural_vision: .28 (rank 138; 38 benchmarks)

  V1: .31 (rank 279; 24 benchmarks)
    FreemanZiemba2013.V1-pls v3 [reference]: .20 (rank 366); recordings from 102 sites in V1, 315 images
    Marques2020 [reference]: .67 (rank 310; 22 benchmarks)
      V1-orientation: .80 (rank 286; 7 benchmarks)
        Marques2020_DeValois1982-pref_or v1: .96 (rank 127)
        Marques2020_Ringach2002-circular_variance v1: .65 (rank 339)
        Marques2020_Ringach2002-cv_bandwidth_ratio v1: .85 (rank 183)
        Marques2020_Ringach2002-opr_cv_diff v1: .86 (rank 208)
        Marques2020_Ringach2002-or_bandwidth v1: .64 (rank 347)
        Marques2020_Ringach2002-or_selective v1: .99 (rank 69)
        Marques2020_Ringach2002-orth_pref_ratio v1: .62 (rank 329)
      V1-receptive_field_size: .83 (rank 42; 2 benchmarks)
        Marques2020_Cavanaugh2002-grating_summation_field v1 [reference]: .84 (rank 64)
        Marques2020_Cavanaugh2002-surround_diameter v1 [reference]: .81 (rank 32)
      V1-response_magnitude: .70 (rank 342; 3 benchmarks)
        Marques2020_FreemanZiemba2013-max_noise v1 [reference]: .59 (rank 342)
        Marques2020_FreemanZiemba2013-max_texture v1 [reference]: .72 (rank 311)
        Marques2020_Ringach2002-max_dc v1: .79 (rank 363)
      V1-response_selectivity: .58 (rank 321; 4 benchmarks)
        Marques2020_FreemanZiemba2013-texture_selectivity v1 [reference]: .57 (rank 343)
        Marques2020_FreemanZiemba2013-texture_sparseness v1 [reference]: .65 (rank 248)
        Marques2020_FreemanZiemba2013-texture_variance_ratio v1 [reference]: .90 (rank 48)
        Marques2020_Ringach2002-modulation_ratio v1: .20 (rank 370)
      V1-spatial_frequency: .72 (rank 283; 3 benchmarks)
        Marques2020_DeValois1982-peak_sf v1: .84 (rank 45)
        Marques2020_Schiller1976-sf_bandwidth v1 [reference]: .55 (rank 353)
        Marques2020_Schiller1976-sf_selective v1 [reference]: .77 (rank 269)
      V1-surround_modulation: .68 (rank 148; 1 benchmark)
        Marques2020_Cavanaugh2002-surround_suppression_index v1 [reference]: .68 (rank 148)
      V1-texture_modulation: .40 (rank 369; 2 benchmarks)
        Marques2020_FreemanZiemba2013-abs_texture_modulation_index v1 [reference]: .24 (rank 374)
        Marques2020_FreemanZiemba2013-texture_modulation_index v1 [reference]: .57 (rank 347)
    Coggan2024_fMRI.V1-rdm v1: .07 (rank 78)

  V2: .16 (rank 106; 2 benchmarks)
    FreemanZiemba2013.V2-pls v3 [reference]: .27 (rank 71); recordings from 103 sites in V2, 315 images
    Coggan2024_fMRI.V2-rdm v1: .05 (rank 140)

  V4: .37 (rank 31; 5 benchmarks)
    MajajHong2015.V4-pls v4 [reference]: .52 (rank 117); recordings from 88 sites in V4, 2560 images
    Sanghavi2020.V4-pls v2 [reference]: .55 (rank 181); recordings from 47 sites in V4, 5760 images
    SanghaviJozwik2020.V4-pls v2 [reference]: .47 (rank 89); recordings from 50 sites in V4, 4916 images
    SanghaviMurty2020.V4-pls v2 [reference]: .21 (rank 129); recordings from 46 sites in V4, 300 images
    Coggan2024_fMRI.V4-rdm v1: .08 (rank 51)

  IT: .29 (rank 141; 7 benchmarks)
    Bracci2019.anteriorVTC-rdm v1: .27 (rank 91)
    MajajHong2015.IT-pls v4 [reference]: .44 (rank 89); recordings from 168 sites in IT, 2560 images
    Sanghavi2020.IT-pls v2 [reference]: .46 (rank 118); recordings from 88 sites in IT, 5760 images
    SanghaviJozwik2020.IT-pls v2 [reference]: .42 (rank 185); recordings from 26 sites in IT, 4916 images
    SanghaviMurty2020.IT-pls v2 [reference]: .34 (rank 79); recordings from 29 sites in IT, 300 images
    Coggan2024_fMRI.IT-rdm v1: .11 (rank 194)
behavior_vision: .30 (rank 136; 43 benchmarks)

  Geirhos2021-error_consistency [reference]: .18 (rank 151; 17 benchmarks)
    Geirhos2021colour-error_consistency v1 [reference]: .21 (rank 184)
    Geirhos2021contrast-error_consistency v1 [reference]: .14 (rank 156)
    Geirhos2021cueconflict-error_consistency v1 [reference]: .17 (rank 146)
    Geirhos2021edge-error_consistency v1 [reference]: .08 (rank 134)
    Geirhos2021eidolonI-error_consistency v1 [reference]: .28 (rank 163)
    Geirhos2021eidolonII-error_consistency v1 [reference]: .27 (rank 164)
    Geirhos2021eidolonIII-error_consistency v1 [reference]: .26 (rank 166)
    Geirhos2021falsecolour-error_consistency v1 [reference]: .25 (rank 150)
    Geirhos2021highpass-error_consistency v1 [reference]: .05 (rank 153)
    Geirhos2021lowpass-error_consistency v1 [reference]: .19 (rank 103)
    Geirhos2021phasescrambling-error_consistency v1 [reference]: .05 (rank 223)
    Geirhos2021powerequalisation-error_consistency v1 [reference]: .04 (rank 220)
    Geirhos2021rotation-error_consistency v1 [reference]: .14 (rank 121)
    Geirhos2021silhouette-error_consistency v1 [reference]: .44 (rank 131)
    Geirhos2021sketch-error_consistency v1 [reference]: .11 (rank 108)
    Geirhos2021stylized-error_consistency v1 [reference]: .28 (rank 121)
    Geirhos2021uniformnoise-error_consistency v1 [reference]: .12 (rank 140)

  Baker2022: .26 (rank 114; 3 benchmarks)
    Baker2022fragmented-accuracy_delta v1 [reference]: .29 (rank 114)
    Baker2022frankenstein-accuracy_delta v1 [reference]: .49 (rank 78)
    Baker2022inverted-accuracy_delta v1 [reference]: .00 (rank 58)

  BMD2024: .22 (rank 46; 4 benchmarks)
    BMD2024.dotted_1Behavioral-accuracy_distance v1: .15 (rank 101)
    BMD2024.dotted_2Behavioral-accuracy_distance v1: .22 (rank 33)
    BMD2024.texture_1Behavioral-accuracy_distance v1: .28 (rank 35)
    BMD2024.texture_2Behavioral-accuracy_distance v1: .22 (rank 61)

  Ferguson2024 [reference]: .35 (rank 215; 14 benchmarks)
    Ferguson2024circle_line-value_delta v1 [reference]: .39 (rank 95)
    Ferguson2024color-value_delta v1 [reference]: .36 (rank 177)
    Ferguson2024convergence-value_delta v1 [reference]: .24 (rank 166)
    Ferguson2024eighth-value_delta v1 [reference]: .20 (rank 99)
    Ferguson2024gray_easy-value_delta v1 [reference]: .04 (rank 240)
    Ferguson2024gray_hard-value_delta v1 [reference]: .71 (rank 69)
    Ferguson2024half-value_delta v1 [reference]: .38 (rank 157)
    Ferguson2024llh-value_delta v1 [reference]: .89 (rank 48)
    Ferguson2024quarter-value_delta v1 [reference]: .84 (rank 32)
    Ferguson2024round_f-value_delta v1 [reference]: .07 (rank 229)
    Ferguson2024round_v-value_delta v1 [reference]: .27 (rank 182)
    Ferguson2024tilted_line-value_delta v1 [reference]: .54 (rank 145)

  Hebart2023-match v1: .33 (rank 75)

  Maniquet2024: .70 (rank 56; 2 benchmarks)
    Maniquet2024-confusion_similarity v1 [reference]: .68 (rank 68)
    Maniquet2024-tasks_consistency v1 [reference]: .72 (rank 22)

  Coggan2024_behavior-ConditionWiseAccuracySimilarity v1: .33 (rank 89)
engineering_vision: .17 (rank 246; 25 benchmarks)

  ImageNet-C-top1 [reference]: .14 (rank 218; 4 benchmarks)
    ImageNet-C-blur-top1 v2 [reference]: .21 (rank 162)
    ImageNet-C-digital-top1 v2 [reference]: .34 (rank 159)

  Geirhos2021-top1 [reference]: .51 (rank 166; 17 benchmarks)
    Geirhos2021colour-top1 v1 [reference]: .94 (rank 157)
    Geirhos2021contrast-top1 v1 [reference]: .67 (rank 152)
    Geirhos2021cueconflict-top1 v1 [reference]: .20 (rank 146)
    Geirhos2021edge-top1 v1 [reference]: .29 (rank 109)
    Geirhos2021eidolonI-top1 v1 [reference]: .45 (rank 223)
    Geirhos2021eidolonII-top1 v1 [reference]: .48 (rank 177)
    Geirhos2021eidolonIII-top1 v1 [reference]: .45 (rank 210)
    Geirhos2021falsecolour-top1 v1 [reference]: .94 (rank 106)
    Geirhos2021highpass-top1 v1 [reference]: .37 (rank 139)
    Geirhos2021lowpass-top1 v1 [reference]: .35 (rank 189)
    Geirhos2021phasescrambling-top1 v1 [reference]: .56 (rank 173)
    Geirhos2021powerequalisation-top1 v1 [reference]: .64 (rank 156)
    Geirhos2021rotation-top1 v1 [reference]: .62 (rank 172)
    Geirhos2021silhouette-top1 v1 [reference]: .39 (rank 212)
    Geirhos2021sketch-top1 v1 [reference]: .58 (rank 161)
    Geirhos2021stylized-top1 v1 [reference]: .37 (rank 155)
    Geirhos2021uniformnoise-top1 v1 [reference]: .32 (rank 188)

  Hermann2020 [reference]: .20 (rank 183; 2 benchmarks)
    Hermann2020cueconflict-shape_bias v1 [reference]: .24 (rank 186)
    Hermann2020cueconflict-shape_match v1 [reference]: .16 (rank 143)

How to use

from brainscore_vision import load_model

model = load_model("mobilenet_v2_1_0_224")
model.start_task(...)       # configure a behavioral task for the model to perform
model.start_recording(...)  # or configure neural recordings from a target region, e.g. "IT"
model.look_at(...)          # present stimuli; returns behavior or recordings accordingly
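
Alternatively, the model can be scored end-to-end against a single public benchmark. The following is a minimal sketch, assuming the brainscore_vision package is installed; the benchmark identifier MajajHong2015public.IT-pls is used purely as an illustration:

from brainscore_vision import load_benchmark, load_model

# Load the model and one public benchmark (identifier is illustrative).
model = load_model("mobilenet_v2_1_0_224")
benchmark = load_benchmark("MajajHong2015public.IT-pls")

# A benchmark is callable on a model: it presents the benchmark stimuli,
# compares the model's responses to the recorded data, and returns a score.
score = benchmark(model)
print(score)

load_model and load_benchmark resolve identifiers against the Brain-Score plugin registry, so both plugins must be available in the installed package.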

Benchmarks BibTeX

@article{Freeman2013,
  author = {Freeman, Jeremy and Ziemba, Corey M. and Heeger, David J. and Simoncelli, Eero P. and Movshon, J. Anthony},
  title = {A functional and perceptual signature of the second visual area in primates},
  journal = {Nature Neuroscience},
  year = {2013},
  volume = {16},
  number = {7},
  pages = {974--981},
  issn = {1546-1726},
  pmid = {23685719},
  doi = {10.1038/nn.3402},
  url = {https://doi.org/10.1038/nn.3402}
}
@article{Marques2021.03.01.433495,
  author = {Marques, Tiago and Schrimpf, Martin and DiCarlo, James J.},
  title = {Multi-scale hierarchical neural network models that bridge from single neurons in the primate primary visual cortex to object recognition behavior},
  journal = {bioRxiv},
  elocation-id = {2021.03.01.433495},
  year = {2021},
  publisher = {Cold Spring Harbor Laboratory},
  doi = {10.1101/2021.03.01.433495},
  url = {https://www.biorxiv.org/content/early/2021/08/13/2021.03.01.433495}
}
@article{Cavanaugh2002,
  author = {Cavanaugh, James R. and Bair, Wyeth and Movshon, J. A.},
  title = {Nature and Interaction of Signals From the Receptive Field Center and Surround in Macaque V1 Neurons},
  journal = {Journal of Neurophysiology},
  year = {2002},
  volume = {88},
  number = {5},
  pages = {2530--2546},
  issn = {0022-3077},
  pmid = {12424292},
  doi = {10.1152/jn.00692.2001},
  url = {http://www.physiology.org/doi/10.1152/jn.00692.2001}
}
@article{Schiller1976,
  author = {Schiller, P. H. and Finlay, B. L. and Volman, S. F.},
  title = {Quantitative studies of single-cell properties in monkey striate cortex. III. Spatial frequency},
  journal = {Journal of Neurophysiology},
  year = {1976},
  volume = {39},
  number = {6},
  pages = {1334--1351},
  issn = {0022-3077},
  pmid = {825624},
  doi = {10.1152/jn.1976.39.6.1352},
  url = {http://www.ncbi.nlm.nih.gov/pubmed/825624}
}
@article{santurkar2019computer,
  author = {Shibani Santurkar and Dimitris Tsipras and Brandon Tran and Andrew Ilyas and Logan Engstrom and Aleksander Madry},
  title = {Computer Vision with a Single (Robust) Classifier},
  journal = {arXiv preprint arXiv:1906.09453},
  year = {2019}
}
@article{Majaj13402,
  author = {Majaj, Najib J. and Hong, Ha and Solomon, Ethan A. and DiCarlo, James J.},
  title = {Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance},
  journal = {Journal of Neuroscience},
  year = {2015},
  volume = {35},
  number = {39},
  pages = {13402--13418},
  publisher = {Society for Neuroscience},
  issn = {0270-6474},
  doi = {10.1523/JNEUROSCI.5181-14.2015},
  url = {https://www.jneurosci.org/content/35/39/13402}
}
@misc{Sanghavi_DiCarlo_2021,
  author = {Sanghavi, Sachi and DiCarlo, James J},
  title = {Sanghavi2020},
  year = {2021},
  month = {Nov},
  publisher = {OSF},
  doi = {10.17605/OSF.IO/CHWDK},
  url = {osf.io/chwdk}
}
@misc{Sanghavi_Jozwik_DiCarlo_2021,
  author = {Sanghavi, Sachi and Jozwik, Kamila M and DiCarlo, James J},
  title = {SanghaviJozwik2020},
  year = {2021},
  month = {Nov},
  publisher = {OSF},
  doi = {10.17605/OSF.IO/FHY36},
  url = {osf.io/fhy36}
}
@misc{Sanghavi_Murty_DiCarlo_2021,
  author = {Sanghavi, Sachi and Murty, N A R and DiCarlo, James J},
  title = {SanghaviMurty2020},
  year = {2021},
  month = {Nov},
  publisher = {OSF},
  doi = {10.17605/OSF.IO/FCHME},
  url = {osf.io/fchme}
}
@article{Kar2019,
  author = {Kar, Kohitij and Kubilius, Jonas and Schmidt, Kailyn and Issa, Elias B. and DiCarlo, James J.},
  title = {Evidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior},
  journal = {Nature Neuroscience},
  year = {2019},
  volume = {22},
  number = {6},
  pages = {974--983},
  issn = {1546-1726},
  doi = {10.1038/s41593-019-0392-5},
  url = {https://doi.org/10.1038/s41593-019-0392-5}
}
@article{geirhos2021partial,
  author = {Geirhos, Robert and Narayanappa, Kantharaju and Mitzkus, Benjamin and Thieringer, Tizian and Bethge, Matthias and Wichmann, Felix A and Brendel, Wieland},
  title = {Partial success in closing the gap between human and machine vision},
  journal = {Advances in Neural Information Processing Systems},
  volume = {34},
  year = {2021},
  url = {https://openreview.net/forum?id=QkljT4mrfs}
}
@article{BAKER2022104913,
  author = {Nicholas Baker and James H. Elder},
  title = {Deep learning models fail to capture the configural nature of human shape perception},
  journal = {iScience},
  year = {2022},
  volume = {25},
  number = {9},
  pages = {104913},
  issn = {2589-0042},
  doi = {10.1016/j.isci.2022.104913},
  url = {https://www.sciencedirect.com/science/article/pii/S2589004222011853}
}
@misc{ferguson_ngo_lee_dicarlo_schrimpf_2024,
  author = {Ferguson, Michael E, Jr and Ngo, Jerry and Lee, Michael and DiCarlo, James and Schrimpf, Martin},
  title = {How Well is Visual Search Asymmetry predicted by a Binary-Choice, Rapid, Accuracy-based Visual-search, Oddball-detection (BRAVO) task?},
  year = {2024},
  month = {Jun},
  publisher = {OSF},
  doi = {10.17605/OSF.IO/5BA3N},
  url = {osf.io/5ba3n}
}
@article{Maniquet2024.04.02.587669,
  author = {Maniquet, Tim and Op de Beeck, Hans and Costantino, Andrea Ivan},
  title = {Recurrent issues with deep neural network models of visual recognition},
  journal = {bioRxiv},
  elocation-id = {2024.04.02.587669},
  year = {2024},
  publisher = {Cold Spring Harbor Laboratory},
  doi = {10.1101/2024.04.02.587669},
  url = {https://www.biorxiv.org/content/early/2024/04/10/2024.04.02.587669}
}
@article{Hendrycks2019-di,
  author = {Hendrycks, Dan and Dietterich, Thomas},
  title = {Benchmarking Neural Network Robustness to Common Corruptions and Perturbations},
  journal = {arXiv preprint arXiv:1903.12261},
  year = {2019},
  url = {https://arxiv.org/abs/1903.12261}
}
@article{hermann2020origins,
  author = {Hermann, Katherine and Chen, Ting and Kornblith, Simon},
  title = {The origins and prevalence of texture bias in convolutional neural networks},
  journal = {Advances in Neural Information Processing Systems},
  volume = {33},
  pages = {19000--19015},
  year = {2020},
  url = {https://proceedings.neurips.cc/paper/2020/hash/db5f9f42a7157abe65bb145000b5871a-Abstract.html}
}

Layer Commitment

Region  Layer
V1      mobilenet_v2.layer.7
V2      mobilenet_v2.layer.6
V4      mobilenet_v2.layer.6
IT      mobilenet_v2.layer.12
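
These committed layers determine which activations are read out when a benchmark requests neural recordings from a region. A minimal sketch, assuming the brainscore_vision package is installed; the recording target and time_bins value below are illustrative:

from brainscore_vision import load_model

model = load_model("mobilenet_v2_1_0_224")
# Per the table above, recordings from IT are read out of mobilenet_v2.layer.12.
model.start_recording("IT", time_bins=[(70, 170)])

start_recording only configures the readout; a subsequent model.look_at(...) call on a stimulus set returns the committed layer's activations as the model's neural response.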

Visual Angle

Not specified (no visual angle in degrees is committed for this model).