Sample stimuli

[Ten sample stimulus images (sample 0 through sample 9) are displayed here on the benchmark page.]

How to use

# Load the public MajajHong2015.V4-pls benchmark and score a model against it.
from brainscore_vision import load_benchmark

benchmark = load_benchmark("MajajHong2015.V4-pls")
score = benchmark(my_model)  # my_model must follow the Brain-Score model interface
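
A fuller sketch of scoring a registered model end to end (the "alexnet" identifier is used purely as an illustration; any model registered in your brainscore_vision installation works the same way):

from brainscore_vision import load_benchmark, load_model

model = load_model("alexnet")  # illustrative identifier; substitute any registered model
benchmark = load_benchmark("MajajHong2015.V4-pls")
score = benchmark(model)
print(score)  # a Score object, ceiled against the benchmark ceiling reported below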

Model scores

[Leaderboard table with columns Rank, Model, and Score, listing 448 models. Model names did not survive extraction from the page; ceiled scores run from .620 at rank 1 down to .068 at rank 429, and ranks 430-448 are listed without scores.]

Benchmark bibtex

@article {Majaj13402,
            author = {Majaj, Najib J. and Hong, Ha and Solomon, Ethan A. and DiCarlo, James J.},
            title = {Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance},
            volume = {35},
            number = {39},
            pages = {13402--13418},
            year = {2015},
            doi = {10.1523/JNEUROSCI.5181-14.2015},
            publisher = {Society for Neuroscience},
            abstract = {To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ({\textquotedblleft}face patches{\textquotedblright}) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of \~{}60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of \>100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior.},
            issn = {0270-6474},
            URL = {https://www.jneurosci.org/content/35/39/13402},
            eprint = {https://www.jneurosci.org/content/35/39/13402.full.pdf},
            journal = {Journal of Neuroscience}}

Ceiling

0.90

Note that scores are relative to this ceiling.
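
For example, under this normalization a raw neural predictivity of 0.54 would be reported as 0.54 / 0.90 = 0.60; the exact normalization (a simple division by the ceiling) is assumed here rather than quoted from the benchmark code.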

Data: MajajHong2015.V4

Neural recordings from 88 sites in V4 in response to 2,560 stimuli.
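
The underlying recordings can be inspected directly through brainscore_vision's data registry. A minimal sketch, assuming the dataset is registered under an identifier such as "MajajHong2015.public" (the exact identifier may differ in your installation):

from brainscore_vision import load_dataset

assembly = load_dataset("MajajHong2015.public")  # identifier is an assumption
print(assembly.dims)   # typically includes presentation and neuroid dimensions
print(assembly.shape)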

Metric: pls
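
The pls metric scores neural predictivity: model activations are mapped to the recorded responses with cross-validated partial least squares regression, and the correlation between predicted and held-out responses is aggregated across recording sites (then normalized by the ceiling above). The following is a minimal sketch of that idea using scikit-learn; the number of components, the single train/test split, and the random stand-in data are illustrative assumptions, not the benchmark's exact implementation.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def pls_predictivity(model_features, neural_responses, n_components=25, seed=0):
    """Median Pearson r across sites between PLS predictions and held-out responses.

    model_features: (n_stimuli, n_features) array of model activations.
    neural_responses: (n_stimuli, n_sites) array of recorded responses.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        model_features, neural_responses, test_size=0.25, random_state=seed)
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_train, y_train)
    y_pred = pls.predict(X_test)
    # Pearson correlation per recording site between predicted and held-out responses.
    rs = [np.corrcoef(y_pred[:, i], y_test[:, i])[0, 1] for i in range(y_test.shape[1])]
    return float(np.median(rs))

# Random stand-ins shaped like this benchmark's data: 2560 stimuli, 88 V4 sites.
rng = np.random.default_rng(0)
features = rng.normal(size=(2560, 512))
responses = rng.normal(size=(2560, 88))
print(pls_predictivity(features, responses))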