Sample stimuli

(10 sample stimulus images: sample 0 through sample 9)

How to use

from brainscore_vision import load_benchmark

# Load the benchmark; "my_model" stands in for any model that
# implements the Brain-Score model interface.
benchmark = load_benchmark("MajajHong2015.V4-pls")
score = benchmark(my_model)

Model scores

456 models have been scored on this benchmark (model names are not reproduced here). Scores range from .695 (rank 1) down to .060 (rank 456), with a median of .510 (rank 228) and a sharp drop-off below roughly rank 440.

Benchmark bibtex

@article{Majaj13402,
            author = {Majaj, Najib J. and Hong, Ha and Solomon, Ethan A. and DiCarlo, James J.},
            title = {Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance},
            volume = {35},
            number = {39},
            pages = {13402--13418},
            year = {2015},
            doi = {10.1523/JNEUROSCI.5181-14.2015},
            publisher = {Society for Neuroscience},
            abstract = {To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ({\textquotedblleft}face patches{\textquotedblright}) did not improve predictive power.
Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of \~{}60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of \>100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior.},
            issn = {0270-6474},
            URL = {https://www.jneurosci.org/content/35/39/13402},
            eprint = {https://www.jneurosci.org/content/35/39/13402.full.pdf},
            journal = {Journal of Neuroscience}}

Ceiling

0.90

Note that scores are relative to this ceiling.
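In other words, a reported score can be read as raw neural predictivity divided by this ceiling. A minimal sketch of that normalization (the raw value below is a made-up example, and the exact normalization Brain-Score applies may differ in detail):

```python
# Ceiling-normalize a raw neural-predictivity score.
ceiling = 0.90            # internal consistency of the V4 recordings
raw_predictivity = 0.626  # hypothetical raw median correlation of a model
ceiled_score = raw_predictivity / ceiling
print(round(ceiled_score, 3))  # → 0.696
```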

Data: MajajHong2015.V4

Neural recordings from 88 sites in V4 in response to 2,560 stimuli

Metric: pls (partial least squares regression)
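The pls metric fits a cross-validated regression from model activations to the recorded V4 responses and scores how well the predictions correlate with held-out responses at each site. The sketch below uses synthetic data and substitutes ordinary least squares for the actual PLS regression (Brain-Score's metric uses PLS with 25 components), so it illustrates the shape of the computation rather than the exact metric:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data: stimuli x 88 V4 sites (as in this benchmark), and
# hypothetical model activations with 64 features (sizes shrunk for speed).
n_stimuli, n_sites, n_features = 200, 88, 64
features = rng.normal(size=(n_stimuli, n_features))
weights = rng.normal(size=(n_features, n_sites))
responses = features @ weights + 0.5 * rng.normal(size=(n_stimuli, n_sites))

# Train/test split, then a linear mapping from features to responses.
# (Ordinary least squares stands in here for PLS regression.)
train = rng.permutation(n_stimuli) < int(0.9 * n_stimuli)
beta, *_ = np.linalg.lstsq(features[train], responses[train], rcond=None)
pred = features[~train] @ beta

def pearson(a, b):
    """Column-wise Pearson correlation between two (stimuli x sites) arrays."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / np.sqrt((a**2).sum(axis=0) * (b**2).sum(axis=0))

# Correlation per recording site on held-out stimuli, aggregated by the median;
# this raw score would then be divided by the benchmark ceiling.
raw_score = float(np.median(pearson(pred, responses[~train])))
print(round(raw_score, 3))
```

Because the toy responses are generated from the features plus mild noise, the held-out correlation comes out high; a real model scored against real V4 data lands much lower, as the leaderboard above shows.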