Sample stimuli

[10 sample stimulus images from the dataset are shown here on the benchmark page]

How to use

from brainscore_vision import load_benchmark

# Load the public MajajHong2015 V4 neural benchmark (PLS metric)
benchmark = load_benchmark("MajajHong2015.V4-pls")

# my_model must implement Brain-Score's BrainModel interface
score = benchmark(my_model)

Model scores

Rank  Score
1     .695
2     .671
3     .639
4     .625
5     .594
6     .586
7     .558
8     .555
9     .549
10    .549

(Top 10 shown. Model names were not preserved in this export; the full leaderboard contains 465 ranked entries, with scores declining gradually from .546 at rank 11 to .060 at rank 464, and no score captured for rank 465.)

Benchmark bibtex

@article {Majaj13402,
            author = {Majaj, Najib J. and Hong, Ha and Solomon, Ethan A. and DiCarlo, James J.},
            title = {Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance},
            volume = {35},
            number = {39},
            pages = {13402--13418},
            year = {2015},
            doi = {10.1523/JNEUROSCI.5181-14.2015},
            publisher = {Society for Neuroscience},
            abstract = {To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ({\textquotedblleft}face patches{\textquotedblright}) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of \~{}60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of \>100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior.},
            issn = {0270-6474},
            URL = {https://www.jneurosci.org/content/35/39/13402},
            eprint = {https://www.jneurosci.org/content/35/39/13402.full.pdf},
            journal = {Journal of Neuroscience}}

Ceiling

0.90.

Note that scores are relative to this ceiling.
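A short illustration of what "relative to the ceiling" means, assuming Brain-Score's usual convention of dividing a model's raw predictivity by the ceiling (the 0.55 raw score below is a made-up value):

```python
# Hypothetical raw neural predictivity for some model (illustrative value only)
raw_score = 0.55
ceiling = 0.90  # internal-consistency ceiling of the V4 recordings

# Reported score is raw predictivity normalized by the ceiling
ceiled_score = raw_score / ceiling
print(round(ceiled_score, 3))  # prints 0.611
```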

Data: MajajHong2015.V4

Neural recordings for 2,560 stimuli from 88 electrode sites in monkey V4.

Metric: pls
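The pls metric fits a cross-validated partial-least-squares regression from model activations to the recorded V4 responses and scores the model by how well it predicts held-out responses (a Pearson correlation per recording site, aggregated across sites, then normalized by the ceiling). The sketch below shows that train/predict/correlate shape on synthetic data; it substitutes ordinary least squares for PLS to stay dependency-free, uses a single split instead of full cross-validation, and all names and sizes are made up.

```python
import numpy as np

# Synthetic stand-ins (all sizes are made up): model features X, neural responses Y
rng = np.random.default_rng(0)
n_stim, n_feat, n_sites = 200, 30, 8
X = rng.normal(size=(n_stim, n_feat))  # model activations per stimulus
Y = X @ rng.normal(size=(n_feat, n_sites)) + 0.1 * rng.normal(size=(n_stim, n_sites))

# Train/test split over stimuli (Brain-Score cross-validates; one split shown here)
train, test = slice(0, 160), slice(160, 200)

# Fit a linear mapping on the training split; OLS stands in for PLS in this sketch
coef, *_ = np.linalg.lstsq(X[train], Y[train], rcond=None)
pred = X[test] @ coef

def pearson(a, b):
    """Pearson correlation between two 1-D arrays."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Correlate predictions with held-out responses, one r per recording site,
# then aggregate across sites (median here)
rs = [pearson(pred[:, i], Y[test, i]) for i in range(n_sites)]
raw_score = float(np.median(rs))
```

Dividing `raw_score` by the benchmark ceiling would then yield the kind of relative score shown on the leaderboard.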