| Rank | Model | Reference | average | V1 | V2 | V4 | IT | IT-temporal | behavior | ImageNet |
|------|-------|-----------|---------|----|----|----|----|-------------|----------|----------|
| 1 | CORnet-S | Kubilius et al., 2018 | .402 | .146 | .208 | .593 | .533 | .384 | .545 | .747 |
| 2 | densenet-169 | Huang et al., 2016 | .365 | .198 | .288 | .618 | .542 | X | .543 | .759 |
| 2 | resnet-101_v1 | He et al., 2015 | .365 | .207 | .274 | .600 | .545 | X | .561 | .764 |
| 4 | resnet-50_v1 | He et al., 2015 | .364 | .208 | .279 | .611 | .558 | X | .526 | .752 |
| 5 | resnet-152_v1 | He et al., 2015 | .363 | .211 | .278 | .607 | .548 | X | .533 | .768 |
| 5 | densenet-201 | Huang et al., 2016 | .363 | .206 | .284 | .604 | .544 | X | .537 | .772 |
| 5 | resnet-50_v2 | He et al., 2015 | .363 | .229 | .283 | .609 | .504 | X | .553 | .756 |
| 8 | resnet-101_v2 | He et al., 2015 | .362 | .217 | .278 | .615 | .508 | X | .555 | .774 |
| 9 | mobilenet_v2_0.75_224 | Howard et al., 2017 | .360 | .223 | .260 | .596 | .531 | X | .547 | .698 |
| 10 | densenet-121 | Huang et al., 2016 | .359 | .199 | .271 | .604 | .544 | X | .535 | .745 |
| 10 | mobilenet_v1_1.0_224 | Howard et al., 2017 | .359 | .205 | .285 | .594 | .567 | X | .505 | .709 |
| 12 | resnet-152_v2 | He et al., 2015 | .358 | .209 | .277 | .606 | .516 | X | .540 | .778 |
| 13 | mobilenet_v1_0.75_192 | Howard et al., 2017 | .357 | .218 | .297 | .574 | .550 | X | .502 | .672 |
| 14 | mobilenet_v1_0.75_224 | Howard et al., 2017 | .355 | .209 | .273 | .593 | .553 | X | .502 | .684 |
| 14 | mobilenet_v1_1.0_192 | Howard et al., 2017 | .355 | .216 | .286 | .588 | .561 | X | .476 | .700 |
| 14 | mobilenet_v2_1.0_192 | Howard et al., 2017 | .355 | .199 | .276 | .580 | .547 | X | .526 | .707 |
| 17 | mobilenet_v2_1.4_224 | Howard et al., 2017 | .354 | .208 | .270 | .598 | .540 | X | .506 | .750 |
| 17 | mobilenet_v2_0.75_192 | Howard et al., 2017 | .354 | .203 | .274 | .586 | .537 | X | .526 | .687 |
| 19 | inception_resnet_v2 | Szegedy et al., 2016 | .352 | .202 | .270 | .606 | .534 | X | .502 | .804 |
| 19 | mobilenet_v2_1.0_224 | Howard et al., 2017 | .352 | .208 | .275 | .582 | .539 | X | .509 | .718 |
| 21 | inception_v3 | Szegedy et al., 2015 | .351 | .196 | .280 | .614 | .533 | X | .481 | .780 |
| 21 | mobilenet_v1_1.0_160 | Howard et al., 2017 | .351 | .211 | .289 | .591 | .538 | X | .475 | .680 |
| 23 | resnet-18 | He et al., 2015 | .350 | .216 | .266 | .574 | .519 | X | .524 | .698 |
| 23 | mobilenet_v2_1.0_160 | Howard et al., 2017 | .350 | .199 | .280 | .577 | .551 | X | .495 | .688 |
| 23 | inception_v1 | Szegedy et al., 2014 | .350 | .194 | .283 | .618 | .480 | X | .524 | .698 |
| 23 | mobilenet_v2_1.3_224 | Howard et al., 2017 | .350 | .181 | .269 | .599 | .542 | X | .511 | .744 |
| 27 | mobilenet_v2_1.0_128 | Howard et al., 2017 | .348 | .216 | .281 | .578 | .540 | X | .474 | .653 |
| 27 | mobilenet_v2_0.75_128 | Howard et al., 2017 | .348 | .206 | .283 | .583 | .547 | X | .470 | .632 |
| 29 | mobilenet_v2_0.75_160 | Howard et al., 2017 | .347 | .201 | .275 | .574 | .552 | X | .480 | .664 |
| 29 | inception_v2 | Szegedy et al., 2015 | .347 | .186 | .261 | .591 | .531 | X | .513 | .739 |
| 31 | mobilenet_v1_0.5_224 | Howard et al., 2017 | .346 | .208 | .275 | .578 | .540 | X | .476 | .633 |
| 32 | mobilenet_v1_1.0_128 | Howard et al., 2017 | .345 | .200 | .301 | .569 | .529 | X | .471 | .652 |
| 32 | mobilenet_v2_0.5_224 | Howard et al., 2017 | .345 | .216 | .256 | .578 | .531 | X | .488 | .654 |
| 34 | resnet-34 | He et al., 2015 | .344 | .195 | .266 | .577 | .479 | X | .546 | .733 |
| 34 | xception | Chollet, 2016 | .344 | .185 | .258 | .619 | .494 | X | .505 | .790 |
| 34 | mobilenet_v1_0.5_192 | Howard et al., 2017 | .344 | .211 | .287 | .559 | .541 | X | .465 | .617 |
| 37 | mobilenet_v2_0.75_96 | Howard et al., 2017 | .342 | .220 | .286 | .548 | .541 | X | .459 | .588 |
| 37 | mobilenet_v2_0.5_192 | Howard et al., 2017 | .342 | .203 | .261 | .575 | .528 | X | .483 | .639 |
| 37 | vgg-19 | Simonyan et al., 2014 | .342 | .183 | .266 | .619 | .491 | X | .494 | .711 |
| 37 | vgg-16 | Simonyan et al., 2014 | .342 | .177 | .274 | .627 | .513 | X | .461 | .715 |
| 37 | mobilenet_v1_0.75_128 | Howard et al., 2017 | .342 | .198 | .289 | .585 | .534 | X | .446 | .621 |
| 42 | nasnet_large | Zoph et al., 2017 | .341 | .197 | .229 | .604 | .529 | X | .489 | .827 |
| 43 | pnasnet_large | Liu et al., 2017 | .340 | .202 | .234 | .591 | .513 | X | .499 | .829 |
| 43 | mobilenet_v2_0.5_160 | Howard et al., 2017 | .340 | .212 | .267 | .570 | .536 | X | .456 | .610 |
| 45 | mobilenet_v1_0.75_160 | Howard et al., 2017 | .339 | .222 | .280 | .557 | .528 | X | .447 | .653 |
| 46 | mobilenet_v1_0.5_160 | Howard et al., 2017 | .338 | .216 | .281 | .574 | .521 | X | .434 | .591 |
| 47 | mobilenet_v2_1.0_96 | Howard et al., 2017 | .336 | .200 | .298 | .548 | .532 | X | .437 | .603 |
| 48 | mobilenet_v2_0.5_128 | Howard et al., 2017 | .334 | .194 | .272 | .577 | .530 | X | .429 | .577 |
| 49 | mobilenet_v2_0.35_160 | Howard et al., 2017 | .333 | .208 | .276 | .544 | .527 | X | .440 | .557 |
| 50 | mobilenet_v2_0.35_192 | Howard et al., 2017 | .328 | .213 | .251 | .562 | .523 | X | .417 | .582 |
| 50 | inception_v4 | Szegedy et al., 2016 | .328 | .158 | .204 | .576 | .497 | X | .534 | .802 |
| 50 | bagnet33 | Brendel et al., 2019 | .328 | .192 | .233 | .609 | .471 | X | .462 | .590 |
| 53 | mobilenet_v2_0.35_224 | Howard et al., 2017 | .326 | .184 | .239 | .565 | .500 | X | .466 | .603 |
| 54 | mobilenet_v1_0.5_128 | Howard et al., 2017 | .325 | .214 | .283 | .573 | .502 | X | .380 | .563 |
| 55 | alexnet | Krizhevsky et al., 2012 | .324 | .213 | .255 | .582 | .526 | X | .370 | .577 |
| 56 | nasnet_mobile | Zoph et al., 2017 | .323 | .194 | .251 | .591 | .522 | X | .382 | .740 |
| 57 | mobilenet_v2_0.5_96 | Howard et al., 2017 | .315 | .209 | .279 | .526 | .506 | X | .373 | .512 |
| 58 | mobilenet_v1_0.25_224 | Howard et al., 2017 | .314 | .217 | .235 | .562 | .517 | X | .352 | .498 |
| 59 | mobilenet_v2_0.35_128 | Howard et al., 2017 | .311 | .197 | .266 | .549 | .491 | X | .364 | .508 |
| 60 | squeezenet1_1 | Iandola et al., 2016 | .309 | .221 | .279 | .599 | .460 | X | .291 | .575 |
| 61 | mobilenet_v1_0.25_192 | Howard et al., 2017 | .308 | .208 | .265 | .519 | .493 | X | .362 | .477 |
| 62 | bagnet17 | Brendel et al., 2019 | .307 | .209 | .240 | .587 | .450 | X | .359 | .460 |
| 63 | mobilenet_v1_0.25_128 | Howard et al., 2017 | .304 | .214 | .282 | .549 | .456 | X | .322 | .415 |
| 63 | mobilenet_v2_0.35_96 | Howard et al., 2017 | .304 | .190 | .276 | .543 | .473 | X | .344 | .455 |
| 65 | mobilenet_v1_0.25_160 | Howard et al., 2017 | .303 | .201 | .258 | .545 | .470 | X | .341 | .455 |
| 66 | squeezenet1_0 | Iandola et al., 2016 | .291 | .192 | .228 | .609 | .454 | X | .263 | .575 |
| 67 | CORnet-Z | Kubilius et al., 2018 | .278 | .142 | .135 | .589 | .444 | X | .356 | .470 |
| 68 | bagnet9 | Brendel et al., 2019 | .277 | .166 | .218 | .568 | .404 | X | .307 | .260 |
| 69 | vggface | Parkhi et al., 2015 | .232 | .178 | .225 | .566 | .346 | X | .078 | |

Benchmarks: V1 = FreemanZiemba2013.V1-pls; V2 = FreemanZiemba2013.V2-pls; V4 = Majaj2015.V4-pls; IT = Majaj2015.IT-pls; IT-temporal = Kar2019-ost; behavior = Rajalingham2018-i2n; ImageNet = Deng2009-top1.
Model scores on brain benchmarks. Scores are ceiled, i.e. normalized by each benchmark's noise ceiling. An X marks a benchmark on which a model has not been scored.

About

The Brain-Score platform aims to yield strong computational models of the ventral stream. We enable researchers to quickly get a sense of how their model scores against standardized brain benchmarks on multiple dimensions and facilitate comparisons to other state-of-the-art models. At the same time, new brain data can quickly be tested against a wide range of models to determine how well existing models explain the data.
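For model developers, the workflow looks roughly like the sketch below. It assumes the open-source `brainscore` Python package and a `score_model` entry point taking a wrapped model and a benchmark identifier; treat the names and signature as illustrative assumptions rather than a guaranteed API.

```python
# Illustrative sketch only: assumes the `brainscore` package exposes a
# `score_model(model_identifier, model, benchmark_identifier)` entry point.
# `wrapped_model` is a placeholder for a model wrapped so that its layer
# activations can be extracted; it is not defined here.
import brainscore

score = brainscore.score_model(
    model_identifier='my-model',                      # name under which scores are stored
    model=wrapped_model,                              # placeholder: activations-exposing model
    benchmark_identifier='Majaj2015.IT-pls',          # IT neural predictivity benchmark
)
print(score)  # ceiled score, typically a center value with an error estimate
```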

Brain-Score is organized by the DiCarlo lab at MIT in collaboration with other labs worldwide. We are working towards an easy-to-use platform where a model can be submitted to yield its scores on a range of brain benchmarks, and where new benchmarks can be incorporated to challenge the models.

This quantified approach lets us keep track of how close our models are to the brain on a range of experiments (data) using different evaluation techniques (metrics). For more details, please refer to the paper.
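For instance, the neural "-pls" benchmarks score a model by how well its activations linearly predict neural responses. Below is a self-contained schematic on synthetic data; the number of PLS components and the ceiling value are illustrative assumptions, not the benchmarks' exact settings.

```python
# Schematic of a "pls" neural-predictivity score: regress neural responses
# from model activations with PLS, correlate predictions with held-out
# responses per neuron, take the median across neurons, and divide by the
# benchmark's noise ceiling. All data here is synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
activations = rng.randn(500, 256)  # 500 stimuli x 256 model features
responses = activations @ rng.randn(256, 100) * 0.1 + rng.randn(500, 100)  # 100 "neurons"

X_train, X_test, y_train, y_test = train_test_split(
    activations, responses, test_size=0.2, random_state=0)

pls = PLSRegression(n_components=25, scale=False)  # component count: illustrative
pls.fit(X_train, y_train)
predictions = pls.predict(X_test)

# Pearson r between predicted and actual responses, per neuron
r_per_neuron = np.array([
    np.corrcoef(predictions[:, i], y_test[:, i])[0, 1]
    for i in range(y_test.shape[1])
])
raw_score = np.median(r_per_neuron)

ceiling = 0.82              # placeholder: each benchmark estimates its own ceiling
print(raw_score / ceiling)  # ceiled score, comparable across benchmarks
```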

Compare

[Interactive scatter plot comparing model scores on two selectable benchmarks, by default Deng2009-top1 vs. Majaj2015.IT-pls, with one dot per model.]
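The same comparison can be reproduced offline from the leaderboard values above. A minimal matplotlib sketch with a subset of the models (scores taken from the table; plotting details are illustrative):

```python
# Offline version of the compare plot: ImageNet top-1 accuracy vs.
# IT neural predictivity, for a subset of leaderboard models.
import matplotlib.pyplot as plt

models = {  # model: (Deng2009-top1, Majaj2015.IT-pls), from the table above
    'CORnet-S':             (0.747, 0.533),
    'nasnet_large':         (0.827, 0.529),
    'pnasnet_large':        (0.829, 0.513),
    'inception_resnet_v2':  (0.804, 0.534),
    'resnet-101_v1':        (0.764, 0.545),
    'mobilenet_v1_1.0_224': (0.709, 0.567),
    'alexnet':              (0.577, 0.526),
    'CORnet-Z':             (0.470, 0.444),
    'bagnet9':              (0.260, 0.404),
}

for name, (imagenet_top1, it_pls) in models.items():
    plt.scatter(imagenet_top1, it_pls)
    plt.annotate(name, (imagenet_top1, it_pls), fontsize=7)

plt.xlabel('Deng2009-top1 (ImageNet top-1 accuracy)')
plt.ylabel('Majaj2015.IT-pls (IT neural predictivity)')
plt.show()
```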

Participate

Challenge the data: Submit a model

Please get in touch with us to have us score your model (we are working on automating this step).

Challenge the models: Submit data

If you have neural or behavioral recordings that you would like models to compete on, please get in touch with us.

Change the evaluation: Submit a metric

If you have an idea for a different way of comparing brain and machine, please get in touch with us.
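As one example of a different way of comparing brain and machine, representational similarity analysis (RSA) correlates the dissimilarity structure of model features with that of neural responses. A generic sketch on synthetic data follows; it is not necessarily a metric Brain-Score already implements.

```python
# Sketch of an alternative metric: representational similarity analysis.
# Build a representational dissimilarity matrix (RDM) for the model and for
# the brain data over the same stimuli, then rank-correlate their entries.
# Synthetic data; a real metric would also cross-validate and ceiling-adjust.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.RandomState(0)
model_features = rng.randn(200, 512)   # 200 stimuli x model features
neural_responses = rng.randn(200, 80)  # same 200 stimuli x recorded neurons

model_rdm = pdist(model_features, metric='correlation')  # condensed upper triangle
brain_rdm = pdist(neural_responses, metric='correlation')

score, _ = spearmanr(model_rdm, brain_rdm)  # rank correlation between RDMs
print(score)
```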

Citation

If you use Brain-Score in your work, please cite "Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?"
@article{SchrimpfKubilius2018BrainScore,
  title={Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?},
  author={Martin Schrimpf and Jonas Kubilius and Ha Hong and Najib J. Majaj and Rishi Rajalingham and Elias B. Issa and Kohitij Kar and Pouya Bashivan and Jonathan Prescott-Roy and Kailyn Schmidt and Daniel L. K. Yamins and James J. DiCarlo},
  journal={bioRxiv preprint},
  year={2018}
}