Sample stimuli

[10 example stimulus images: sample 0 through sample 9]

How to use

from brainscore_vision import load_benchmark
benchmark = load_benchmark("MajajHong2015.V4-pls")
score = benchmark(my_model)

Model scores


Rank  Score
[model names not preserved in this extract]
1  .777
2  .750
3  .714
4  .698
5  .620
6  .614
7  .614
8  .611
9  .610
10  .610
11  .610
12  .610
13  .608
14  .607
15  .605
16  .605
17  .604
18  .604
19  .604
20  .603
21  .603
22  .602
23  .602
24  .602
25  .602
26  .602
27  .602
28  .602
29  .602
30  .602
31  .602
32  .602
33  .602
34  .602
35  .602
36  .601
37  .601
38  .601
39  .600
40  .600
41  .600
42  .600
43  .600
44  .599
45  .599
46  .599
47  .599
48  .599
49  .599
50  .598
51  .598
52  .598
53  .598
54  .598
55  .598
56  .597
57  .597
58  .597
59  .597
60  .596
61  .596
62  .596
63  .596
64  .596
65  .596
66  .596
67  .595
68  .595
69  .595
70  .594
71  .593
72  .592
73  .592
74  .592
75  .592
76  .592
77  .592
78  .592
79  .592
80  .592
81  .592
82  .591
83  .591
84  .591
85  .591
86  .591
87  .591
88  .591
89  .590
90  .590
91  .590
92  .590
93  .589
94  .589
95  .589
96  .589
97  .589
98  .589
99  .589
100  .589
101  .589
102  .589
103  .588
104  .588
105  .588
106  .588
107  .588
108  .588
109  .587
110  .587
111  .587
112  .587
113  .587
114  .586
115  .586
116  .586
117  .586
118  .586
119  .585
120  .585
121  .585
122  .585
123  .584
124  .584
125  .584
126  .584
127  .584
128  .584
129  .584
130  .584
131  .583
132  .583
133  .583
134  .583
135  .583
136  .583
137  .583
138  .582
139  .582
140  .582
141  .582
142  .582
143  .582
144  .582
145  .582
146  .582
147  .582
148  .581
149  .581
150  .581
151  .581
152  .581
153  .581
154  .581
155  .580
156  .580
157  .580
158  .580
159  .580
160  .580
161  .580
162  .580
163  .579
164  .579
165  .579
166  .579
167  .579
168  .579
169  .578
170  .578
171  .578
172  .578
173  .578
174  .578
175  .578
176  .578
177  .578
178  .577
179  .577
180  .577
181  .577
182  .577
183  .576
184  .576
185  .576
186  .575
187  .575
188  .575
189  .575
190  .575
191  .575
192  .575
193  .574
194  .574
195  .574
196  .574
197  .574
198  .574
199  .574
200  .574
201  .574
202  .574
203  .574
204  .573
205  .573
206  .573
207  .573
208  .573
209  .572
210  .572
211  .572
212  .571
213  .571
214  .571
215  .571
216  .570
217  .570
218  .570
219  .570
220  .570
221  .570
222  .570
223  .570
224  .570
225  .569
226  .569
227  .569
228  .569
229  .569
230  .569
231  .569
232  .569
233  .569
234  .569
235  .568
236  .568
237  .568
238  .568
239  .568
240  .568
241  .568
242  .567
243  .567
244  .567
245  .566
246  .566
247  .566
248  .566
249  .566
250  .566
251  .566
252  .566
253  .565
254  .565
255  .565
256  .564
257  .564
258  .564
259  .564
260  .563
261  .563
262  .563
263  .563
264  .562
265  .562
266  .562
267  .562
268  .562
269  .562
270  .561
271  .560
272  .560
273  .560
274  .560
275  .560
276  .560
277  .560
278  .559
279  .559
280  .558
281  .558
282  .558
283  .558
284  .558
285  .558
286  .558
287  .557
288  .557
289  .557
290  .557
291  .556
292  .556
293  .556
294  .555
295  .555
296  .555
297  .555
298  .555
299  .554
300  .554
301  .553
302  .553
303  .553
304  .551
305  .551
306  .551
307  .550
308  .550
309  .550
310  .550
311  .550
312  .550
313  .550
314  .550
315  .550
316  .550
317  .550
318  .550
319  .550
320  .550
321  .550
322  .549
323  .549
324  .549
325  .549
326  .549
327  .548
328  .548
329  .548
330  .548
331  .548
332  .548
333  .547
334  .547
335  .546
336  .545
337  .545
338  .544
339  .544
340  .543
341  .542
342  .541
343  .541
344  .541
345  .540
346  .539
347  .539
348  .538
349  .538
350  .537
351  .537
352  .536
353  .536
354  .536
355  .533
356  .533
357  .531
358  .530
359  .530
360  .527
361  .526
362  .524
363  .523
364  .521
365  .519
366  .518
367  .517
368  .517
369  .516
370  .516
371  .516
372  .515
373  .515
374  .514
375  .514
376  .514
377  .514
378  .514
379  .514
380  .514
381  .514
382  .514
383  .514
384  .514
385  .513
386  .511
387  .511
388  .509
389  .509
390  .504
391  .504
392  .503
393  .501
394  .501
395  .498
396  .498
397  .497
398  .494
399  .494
400  .491
401  .489
402  .487
403  .486
404  .485
405  .485
406  .483
407  .481
408  .476
409  .473
410  .469
411  .466
412  .456
413  .454
414  .453
415  .452
416  .451
417  .445
418  .443
419  .439
420  .438
421  .437
422  .436
423  .433
424  .433
425  .432
426  .432
427  .431
428  .431
429  .430
430  .430
431  .427
432  .421
433  .420
434  .419
435  .418
436  .376
437  .342
438  .339
439  .328
440  .316
441  .185
442  .179
443  .154
444  .098
445  .078
446  .073
447  .068
448  .068
449-467  (no score listed)

Benchmark bibtex

@article {Majaj13402,
            author = {Majaj, Najib J. and Hong, Ha and Solomon, Ethan A. and DiCarlo, James J.},
            title = {Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance},
            volume = {35},
            number = {39},
            pages = {13402--13418},
            year = {2015},
            doi = {10.1523/JNEUROSCI.5181-14.2015},
            publisher = {Society for Neuroscience},
            abstract = {To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ({\textquotedblleft}face patches{\textquotedblright}) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of \~{}60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of {\textgreater}100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior.},
            issn = {0270-6474},
            URL = {https://www.jneurosci.org/content/35/39/13402},
            eprint = {https://www.jneurosci.org/content/35/39/13402.full.pdf},
            journal = {Journal of Neuroscience}}

Ceiling

0.90.

Note that scores are relative to this ceiling.
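Concretely, a model's reported score is its raw neural predictivity divided by this ceiling, so that 1.0 corresponds to explaining all explainable variance. A minimal sketch of that normalization, assuming simple division by the ceiling (the convention Brain-Score follows when reporting scores relative to a ceiling):

```python
# Minimal sketch of ceiling normalization (assumption: ceiled score = raw / ceiling).
CEILING = 0.90  # ceiling for MajajHong2015.V4, from this page

def ceiled_score(raw: float, ceiling: float = CEILING) -> float:
    """Normalize a raw neural-predictivity score by the data's
    internal-consistency ceiling, so 1.0 means the model explains
    all explainable variance."""
    return raw / ceiling

print(round(ceiled_score(0.70), 3))  # hypothetical raw score of .70 -> 0.778
```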

Data: MajajHong2015.V4

Recordings from 88 sites in V4 in response to 2560 stimuli.

Metric: pls (partial least squares regression)