Sample stimuli

[10 sample stimulus images: sample 0 through sample 9]

How to use

from brainscore_vision import load_benchmark

# Load the benchmark by its Brain-Score identifier.
benchmark = load_benchmark("Baker2022fragmented-accuracy_delta")
# Score a model; `my_model` must implement the brainscore_vision model interface.
score = benchmark(my_model)
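
To run the benchmark end to end against one of the publicly registered Brain-Score models, a minimal sketch (assuming brainscore_vision 2.x and that the 'alexnet' identifier is available in the model registry; any registered model identifier works the same way):

from brainscore_vision import load_benchmark, load_model

# Pull a registered model from the Brain-Score model registry
# ('alexnet' is an assumed example identifier).
model = load_model('alexnet')

# Load the benchmark by its identifier and score the model.
benchmark = load_benchmark("Baker2022fragmented-accuracy_delta")
score = benchmark(model)
print(score)  # a Score object whose value is the model's benchmark score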

Model scores

Rank      Score
1         .986
2         .984
3         .984
4         .983
5         .982
6         .982
7         .982
8         .981
9         .978
10        .970
11        .965
12        .960
13        .960
14        .960
15        .957
16        .946
17        .945
18        .944
19        .944
20        .935
21        .926
22        .925
23        .917
24        .903
25        .901
26        .901
27        .901
28        .889
29        .882
30        .868
31        .858
32        .858
33        .838
34        .836
35        .836
36        .834
37        .832
38        .822
39        .806
40        .803
41        .802
42        .799
43        .796
44        .791
45        .788
46        .785
47        .760
48        .758
49        .756
50        .751
51        .740
52        .735
53        .734
54        .734
55        .730
56        .721
57        .720
58        .709
59        .698
60        .671
61        .670
62        .663
63        .656
64        .649
65        .646
66        .617
67        .603
68        .602
69        .592
70        .590
71        .583
72        .582
73        .566
74        .558
75        .558
76        .550
77        .543
78        .541
79        .538
80        .528
81        .524
82        .523
83        .515
84        .507
85        .499
86        .494
87        .478
88        .473
89        .470
90        .446
91        .438
92        .433
93        .424
94        .421
95        .417
96        .412
97        .412
98        .412
99        .411
100       .400
101       .392
102       .392
103       .388
104       .365
105       .350
106       .336
107       .336
108       .333
109       .308
110       .304
111       .289
112       .287
113       .282
114       .280
115       .274
116       .268
117       .264
118       .251
119       .236
120       .221
121       .217
122       .216
123       .204
124       .195
125       .195
126       .186
127       .178
128       .167
129       .161
130       .149
131       .115
132       .111
133       .096
134       .053
135       .038
136       .032
137       .030
138       .029
139       .021
140       .015
141       .014
142       .011
143       .011
144       .003
145-171   .000
172-227   (no score listed)

Benchmark bibtex

@article{BAKER2022104913,
    title = {Deep learning models fail to capture the configural nature of human shape perception},
    journal = {iScience},
    volume = {25},
    number = {9},
    pages = {104913},
    year = {2022},
    issn = {2589-0042},
    doi = {https://doi.org/10.1016/j.isci.2022.104913},
    url = {https://www.sciencedirect.com/science/article/pii/S2589004222011853},
    author = {Nicholas Baker and James H. Elder},
    keywords = {Biological sciences, Neuroscience, Sensory neuroscience},
    abstract = {A hallmark of human object perception is sensitivity to the holistic configuration of the local shape features of an object. Deep convolutional neural networks (DCNNs) are currently the dominant models for object recognition processing in the visual cortex, but do they capture this configural sensitivity? To answer this question, we employed a dataset of animal silhouettes and created a variant of this dataset that disrupts the configuration of each object while preserving local features. While human performance was impacted by this manipulation, DCNN performance was not, indicating insensitivity to object configuration. Modifications to training and architecture to make networks more brain-like did not lead to configural processing, and none of the networks were able to accurately predict trial-by-trial human object judgements. We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition.}
}

Ceiling

Not available

Data: Baker2022fragmented

Metric: accuracy_delta
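
As a rough sketch of what an accuracy-delta metric measures (an assumption based on the whole-versus-fragmented design of Baker & Elder, 2022, not necessarily the exact Brain-Score implementation), the benchmark rewards models whose drop in recognition accuracy from whole to fragmented silhouettes matches the drop observed in humans:

# Hypothetical illustration of an accuracy-delta comparison; the actual
# Brain-Score metric may normalize or aggregate trials differently.
def accuracy_delta_score(model_whole, model_fragmented,
                         human_whole, human_fragmented):
    model_delta = model_whole - model_fragmented    # model's accuracy drop
    human_delta = human_whole - human_fragmented    # humans' accuracy drop
    # 1.0 when the two drops match exactly; falls toward 0.0 as they diverge
    return max(0.0, 1.0 - abs(model_delta - human_delta))

# Example: humans drop 30 points on fragmented shapes, the model only 5.
print(accuracy_delta_score(0.90, 0.85, 0.95, 0.65))  # 0.75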