Sample stimuli

[Ten example stimulus images: sample 0 through sample 9]

How to use

from brainscore_vision import load_benchmark

benchmark = load_benchmark("MajajHong2015.IT-pls")
score = benchmark(my_model)  # my_model must implement Brain-Score's BrainModel interface
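
For a self-contained run, a model can also be pulled from Brain-Score's model registry rather than supplied by hand. A minimal sketch, assuming the load_model helper and "alexnet" as an example registered identifier (any registered model works):

from brainscore_vision import load_benchmark, load_model

model = load_model("alexnet")  # example identifier; any model in the registry works
benchmark = load_benchmark("MajajHong2015.IT-pls")
score = benchmark(model)
print(score)  # ceiling-normalized Score object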

Model scores

Rank    Model    Score
   1             .589
   2             .579
   3             .574
   4             .574
   5             .572
   6             .569
   7             .569
   8             .568
   9             .566
  10             .566
  11             .562
  12             .561
  13             .561
  14             .561
  15             .560
  16             .560
  17             .560
  18             .559
  19             .559
  20             .559
  21             .558
  22             .556
  23             .555
  24             .555
  25             .555
  26             .554
  27             .554
  28             .554
  29             .554
  30             .554
  31             .553
  32             .553
  33             .553
  34             .552
  35             .552
  36             .552
  37             .552
  38             .552
  39             .552
  40             .551
  41             .551
  42             .550
  43             .550
  44             .550
  45             .549
  46             .549
  47             .549
  48             .549
  49             .549
  50             .549
  51             .548
  52             .548
  53             .548
  54             .548
  55             .547
  56             .547
  57             .547
  58             .547
  59             .547
  60             .547
  61             .546
  62             .546
  63             .546
  64             .546
  65             .546
  66             .545
  67             .545
  68             .545
  69             .545
  70             .545
  71             .545
  72             .545
  73             .544
  74             .544
  75             .544
  76             .544
  77             .543
  78             .543
  79             .543
  80             .543
  81             .543
  82             .543
  83             .543
  84             .543
  85             .543
  86             .542
  87             .542
  88             .542
  89             .541
  90             .541
  91             .541
  92             .541
  93             .541
  94             .541
  95             .541
  96             .541
  97             .541
  98             .541
  99             .541
 100             .541
 101             .541
 102             .540
 103             .540
 104             .540
 105             .540
 106             .540
 107             .539
 108             .539
 109             .539
 110             .539
 111             .539
 112             .538
 113             .538
 114             .536
 115             .536
 116             .536
 117             .536
 118             .535
 119             .535
 120             .535
 121             .535
 122             .535
 123             .535
 124             .535
 125             .535
 126             .534
 127             .534
 128             .534
 129             .534
 130             .534
 131             .534
 132             .533
 133             .533
 134             .533
 135             .533
 136             .533
 137             .533
 138             .532
 139             .532
 140             .532
 141             .532
 142             .532
 143             .531
 144             .531
 145             .531
 146             .531
 147             .531
 148             .531
 149             .530
 150             .530
 151             .530
 152             .530
 153             .529
 154             .529
 155             .529
 156             .528
 157             .528
 158             .528
 159             .527
 160             .527
 161             .527
 162             .527
 163             .527
 164             .527
 165             .526
 166             .526
 167             .526
 168             .525
 169             .525
 170             .524
 171             .523
 172             .523
 173             .523
 174             .522
 175             .522
 176             .521
 177             .521
 178             .520
 179             .520
 180             .520
 181             .520
 182             .520
 183             .519
 184             .519
 185             .518
 186             .518
 187             .518
 188             .518
 189             .518
 190             .517
 191             .517
 192             .517
 193             .517
 194             .517
 195             .517
 196             .517
 197             .517
 198             .516
 199             .516
 200             .516
 201             .515
 202             .515
 203             .515
 204             .515
 205             .515
 206             .515
 207             .514
 208             .513
 209             .513
 210             .513
 211             .513
 212             .512
 213             .512
 214             .512
 215             .512
 216             .512
 217             .511
 218             .510
 219             .510
 220             .510
 221             .510
 222             .509
 223             .508
 224             .508
 225             .508
 226             .508
 227             .508
 228             .508
 229             .508
 230             .508
 231             .508
 232             .508
 233             .508
 234             .508
 235             .508
 236             .508
 237             .507
 238             .507
 239             .507
 240             .507
 241             .506
 242             .506
 243             .506
 244             .506
 245             .506
 246             .506
 247             .505
 248             .505
 249             .505
 250             .504
 251             .504
 252             .504
 253             .503
 254             .503
 255             .503
 256             .502
 257             .502
 258             .501
 259             .501
 260             .501
 261             .501
 262             .501
 263             .499
 264             .499
 265             .499
 266             .499
 267             .499
 268             .498
 269             .498
 270             .496
 271             .496
 272             .496
 273             .496
 274             .495
 275             .494
 276             .494
 277             .494
 278             .494
 279             .493
 280             .493
 281             .493
 282             .493
 283             .492
 284             .490
 285             .488
 286             .488
 287             .487
 288             .487
 289             .487
 290             .486
 291             .486
 292             .485
 293             .485
 294             .485
 295             .483
 296             .483
 297             .483
 298             .482
 299             .481
 300             .480
 301             .479
 302             .479
 303             .477
 304             .477
 305             .476
 306             .475
 307             .475
 308             .474
 309             .473
 310             .472
 311             .472
 312             .472
 313             .471
 314             .471
 315             .470
 316             .470
 317             .469
 318             .469
 319             .468
 320             .467
 321             .467
 322             .466
 323             .465
 324             .464
 325             .464
 326             .462
 327             .461
 328             .461
 329             .460
 330             .459
 331             .459
 332             .458
 333             .457
 334             .457
 335             .457
 336             .456
 337             .455
 338             .455
 339             .455
 340             .455
 341             .455
 342             .455
 343             .455
 344             .455
 345             .455
 346             .455
 347             .454
 348             .454
 349             .452
 350             .451
 351             .450
 352             .449
 353             .447
 354             .447
 355             .444
 356             .443
 357             .443
 358             .442
 359             .441
 360             .439
 361             .437
 362             .437
 363             .436
 364             .436
 365             .435
 366             .434
 367             .434
 368             .432
 369             .429
 370             .425
 371             .424
 372             .423
 373             .417
 374             .414
 375             .414
 376             .414
 377             .413
 378             .411
 379             .410
 380             .408
 381             .408
 382             .406
 383             .405
 384             .404
 385             .403
 386             .401
 387             .401
 388             .400
 389             .399
 390             .395
 391             .390
 392             .388
 393             .386
 394             .386
 395             .385
 396             .382
 397             .380
 398             .380
 399             .373
 400             .372
 401             .371
 402             .366
 403             .357
 404             .351
 405             .351
 406             .350
 407             .344
 408             .339
 409             .332
 410             .325
 411             .323
 412             .320
 413             .320
 414             .315
 415             .309
 416             .307
 417             .285
 418             .283
 419             .278
 420             .277
 421             .275
 422             .269
 423             .268
 424             .263
 425             .260
 426             .257
 427             .257
 428             .254
 429             .252
 430             .251
 431             .250
 432             .250
 433             .246
 434             .245
 435             .243
 436             .240
 437             .233
 438             .220
 439             .215
 440             .215
 441             .214
 442             .212
 443             .207
 444             .206
 445             .206
 446             .192
 447             .184
 448             .174
 449             .173
 450             .165
 451             .159
 452             .142
 453             .133
 454             .132
 455             .130
 456             .122
 457             .109
 458             .039
 459             .027
 460             .026
 461             .015
 462             .015

Benchmark bibtex

@article {Majaj13402,
            author = {Majaj, Najib J. and Hong, Ha and Solomon, Ethan A. and DiCarlo, James J.},
            title = {Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance},
            volume = {35},
            number = {39},
            pages = {13402--13418},
            year = {2015},
            doi = {10.1523/JNEUROSCI.5181-14.2015},
            publisher = {Society for Neuroscience},
            abstract = {To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ({\textquotedblleft}face patches{\textquotedblright}) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of \~{}60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of \>100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior.},
            issn = {0270-6474},
            URL = {https://www.jneurosci.org/content/35/39/13402},
            eprint = {https://www.jneurosci.org/content/35/39/13402.full.pdf},
            journal = {Journal of Neuroscience}}

Ceiling

0.82.

Note that scores are relative to this ceiling.
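
As a worked example of that normalization, assuming the usual Brain-Score convention that the reported score is the raw metric value divided by the ceiling:

ceiling = 0.82
raw = 0.483                 # hypothetical raw PLS score, for illustration only
reported = raw / ceiling
print(round(reported, 3))   # 0.589, the top score in the table above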

Data: MajajHong2015.IT

Recordings from 168 sites in IT in response to 2,560 stimuli.
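
The assembly itself can be loaded for inspection. A minimal sketch, assuming brainscore_vision's load_dataset helper and "MajajHong2015.public" as the identifier of the public split (the benchmark scores against held-out data):

from brainscore_vision import load_dataset

assembly = load_dataset("MajajHong2015.public")  # assumed identifier for the public split
print(assembly.dims)   # e.g. ('presentation', 'neuroid', 'time_bin')
print(assembly.shape)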

Metric: pls
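
A minimal sketch of the core of the pls metric, assuming the conventional setup: a 25-component PLS regression from model activations to IT responses, scored by per-site Pearson correlation on held-out stimuli. The real benchmark additionally cross-validates over splits and divides by the ceiling; the arrays below are random stand-ins with the dataset's shapes:

import numpy as np
from scipy.stats import pearsonr
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Stand-ins for model activations (X) and IT responses (Y):
# 2560 stimuli, 512 model features (assumed), 168 recording sites.
rng = np.random.default_rng(0)
X = rng.standard_normal((2560, 512))
Y = rng.standard_normal((2560, 168))

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.1, random_state=0)

pls = PLSRegression(n_components=25)  # 25 components is the conventional choice
pls.fit(X_train, Y_train)
Y_pred = pls.predict(X_test)

# One Pearson r per recording site on held-out stimuli, then aggregate.
site_r = [pearsonr(Y_test[:, i], Y_pred[:, i])[0] for i in range(Y.shape[1])]
print(np.median(site_r))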