Sample stimuli

[Image grid: 10 sample stimuli, sample 0 through sample 9]

How to use

from brainscore_vision import load_benchmark
benchmark = load_benchmark("Baker2022fragmented-accuracy_delta")
score = benchmark(my_model)
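
Here my_model must implement the Brain-Score model interface; models already registered with Brain-Score can instead be loaded by identifier. A minimal sketch, assuming the "alexnet" identifier is present in your brainscore_vision model registry (any registered identifier works the same way):

from brainscore_vision import load_benchmark, load_model

benchmark = load_benchmark("Baker2022fragmented-accuracy_delta")
model = load_model("alexnet")  # hypothetical choice; substitute any registered model identifier
score = benchmark(model)       # presents the benchmark stimuli to the model and scores it
print(score)                   # a Score object; float(score) gives the scalar value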

Model scores

Rank  Score
1     .986
2     .986
3     .984
4     .984
5     .983
6     .982
7     .982
8     .982
9     .981
10    .978
11    .970
12    .965
13    .960
14    .960
15    .960
16    .957
17    .946
18    .945
19    .944
20    .944
21    .935
22    .926
23    .925
24    .917
25    .903
26    .901
27    .901
28    .901
29    .889
30    .882
31    .868
32    .858
33    .858
34    .838
35    .836
36    .836
37    .834
38    .832
39    .822
40    .811
41    .806
42    .803
43    .802
44    .799
45    .796
46    .791
47    .788
48    .785
49    .760
50    .758
51    .756
52    .751
53    .740
54    .735
55    .734
56    .734
57    .730
58    .721
59    .720
60    .709
61    .698
62    .671
63    .670
64    .663
65    .656
66    .649
67    .646
68    .617
69    .603
70    .602
71    .592
72    .590
73    .583
74    .582
75    .566
76    .558
77    .558
78    .550
79    .543
80    .541
81    .538
82    .528
83    .524
84    .523
85    .515
86    .507
87    .499
88    .494
89    .478
90    .473
91    .470
92    .446
93    .445
94    .438
95    .433
96    .424
97    .421
98    .417
99    .412
100   .412
101   .412
102   .411
103   .400
104   .392
105   .392
106   .388
107   .365
108   .350
109   .336
110   .336
111   .333
112   .308
113   .304
114   .289
115   .287
116   .282
117   .280
118   .274
119   .268
120   .264
121   .251
122   .236
123   .221
124   .217
125   .216
126   .204
127   .195
128   .195
129   .186
130   .178
131   .167
132   .161
133   .149
134   .124
135   .115
136   .111
137   .096
138   .096
139   .053
140   .038
141   .032
142   .030
143   .029
144   .021
145   .015
146   .014
147   .011
148   .011
149   .003
150   .000
151   .000
152   .000
153   .000
154   .000
155   .000
156   .000
157   .000
158   .000
159   .000
160   .000
161   .000
162   .000
163   .000
164   .000
165   .000
166   .000
167   .000
168   .000
169   .000
170   .000
171   .000
172   .000
173   .000
174   .000
175   .000
176   .000
177   .000
178   .000
179   .000
180   .000
181   .000
182   .000
183   .000
184-260  (no score available)

Benchmark bibtex

@article{BAKER2022104913,
    title = {Deep learning models fail to capture the configural nature of human shape perception},
    journal = {iScience},
    volume = {25},
    number = {9},
    pages = {104913},
    year = {2022},
    issn = {2589-0042},
    doi = {https://doi.org/10.1016/j.isci.2022.104913},
    url = {https://www.sciencedirect.com/science/article/pii/S2589004222011853},
    author = {Nicholas Baker and James H. Elder},
    keywords = {Biological sciences, Neuroscience, Sensory neuroscience},
    abstract = {A hallmark of human object perception is sensitivity to the holistic configuration of the local shape features of an object. Deep convolutional neural networks (DCNNs) are currently the dominant models for object recognition processing in the visual cortex, but do they capture this configural sensitivity? To answer this question, we employed a dataset of animal silhouettes and created a variant of this dataset that disrupts the configuration of each object while preserving local features. While human performance was impacted by this manipulation, DCNN performance was not, indicating insensitivity to object configuration. Modifications to training and architecture to make networks more brain-like did not lead to configural processing, and none of the networks were able to accurately predict trial-by-trial human object judgements. We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition.}
}

Ceiling

Not available

Data: Baker2022fragmented

Metric: accuracy_delta
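
The metric name suggests a comparison of accuracy drops: the paper's core finding is that fragmenting the silhouettes impaired human recognition but not DCNN recognition. The sketch below is only a hypothetical illustration of that idea; the benchmark's actual scoring code ships with brainscore_vision, and every name and number here is a placeholder, not the real implementation:

# Illustrative sketch only, NOT the benchmark's implementation.
# Scores a model on how closely its intact-to-fragmented accuracy drop
# matches the human drop (1.0 = identical drops, 0.0 = maximally different).
def accuracy_delta_score(model_acc_whole: float, model_acc_fragmented: float,
                         human_acc_whole: float, human_acc_fragmented: float) -> float:
    model_delta = model_acc_whole - model_acc_fragmented
    human_delta = human_acc_whole - human_acc_fragmented
    # Normalize the mismatch by the human drop; guard against division by zero.
    return max(0.0, 1.0 - abs(model_delta - human_delta) / max(abs(human_delta), 1e-9))

# Hypothetical accuracies purely for illustration:
print(accuracy_delta_score(0.90, 0.55, 0.95, 0.60))  # -> 1.0, the drops match exactly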