Benchmarks: V1 = FreemanZiemba2013.V1-pls, V2 = FreemanZiemba2013.V2-pls, V4 = Majaj2015.V4-pls, IT = Majaj2015.IT-pls, IT-temporal = Kar2019-ost, behavior = Rajalingham2018-i2n, ImageNet = Deng2009-top1.

Rank | Model | Reference | average | V1 | V2 | V4 | IT | IT-temporal | behavior | ImageNet
1 | CORnet-S | Kubilius et al., 2018 | .397 | .184 | .191 | .611 | .533 | .316 | .545 | .747
2 | CORnet-R | Kubilius et al., 2018 | .378 | .125 | .133 | .531 | .513 | .356 | .416 | .560
3 | CORnet-R2 | Kubilius et al., 2018 | .374 | .189 | .204 | .428 | .540 | .377 | .508 | .700
4 | resnet-101_v1 | He et al., 2015 | .365 | .207 | .274 | .600 | .545 | X | .561 | .764
4 | densenet-169 | Huang et al., 2016 | .365 | .198 | .288 | .618 | .542 | X | .543 | .759
6 | resnet-50_v1 | He et al., 2015 | .364 | .208 | .279 | .611 | .558 | X | .526 | .752
7 | resnet-50_v2 | He et al., 2015 | .363 | .229 | .283 | .609 | .504 | X | .553 | .756
7 | resnet-152_v1 | He et al., 2015 | .363 | .211 | .278 | .607 | .548 | X | .533 | .768
7 | densenet-201 | Huang et al., 2016 | .363 | .206 | .284 | .604 | .544 | X | .537 | .772
10 | resnet-101_v2 | He et al., 2015 | .362 | .217 | .278 | .615 | .508 | X | .555 | .774
11 | mobilenet_v1_1.0_224 | Howard et al., 2017 | .359 | .205 | .285 | .594 | .567 | X | .505 | .709
11 | densenet-121 | Huang et al., 2016 | .359 | .199 | .271 | .604 | .544 | X | .535 | .745
13 | resnet-152_v2 | He et al., 2015 | .358 | .209 | .277 | .606 | .516 | X | .540 | .778
14 | mobilenet_v1_0.75_192 | Howard et al., 2017 | .357 | .218 | .297 | .574 | .550 | X | .502 | .672
15 | mobilenet_v1_0.75_224 | Howard et al., 2017 | .355 | .209 | .273 | .593 | .553 | X | .502 | .684
15 | mobilenet_v1_1.0_192 | Howard et al., 2017 | .355 | .216 | .286 | .588 | .561 | X | .476 | .700
17 | mobilenet_v1_1.0_160 | Howard et al., 2017 | .351 | .211 | .289 | .591 | .538 | X | .475 | .680
17 | inception_v3 | Szegedy et al., 2015 | .351 | .196 | .280 | .614 | .533 | X | .481 | .780
19 | resnet-18 | He et al., 2015 | .350 | .216 | .266 | .574 | .519 | X | .524 | .698
20 | mobilenet_v1_0.5_224 | Howard et al., 2017 | .346 | .208 | .275 | .578 | .540 | X | .476 | .633
21 | mobilenet_v2_0.5_224 | Howard et al., 2017 | .345 | .216 | .256 | .578 | .531 | X | .488 | .654
21 | mobilenet_v1_1.0_128 | Howard et al., 2017 | .345 | .200 | .301 | .569 | .529 | X | .471 | .652
23 | resnet-34 | He et al., 2015 | .344 | .195 | .266 | .577 | .479 | X | .546 | .733
23 | mobilenet_v1_0.5_192 | Howard et al., 2017 | .344 | .211 | .287 | .559 | .541 | X | .465 | .617
23 | xception | Chollet, 2016 | .344 | .185 | .258 | .619 | .494 | X | .505 | .790
26 | vgg-16 | Simonyan et al., 2014 | .342 | .177 | .274 | .627 | .513 | X | .461 | .715
26 | mobilenet_v2_0.5_192 | Howard et al., 2017 | .342 | .203 | .261 | .575 | .528 | X | .483 | .639
26 | vgg-19 | Simonyan et al., 2014 | .342 | .183 | .266 | .619 | .491 | X | .494 | .711
26 | mobilenet_v1_0.75_128 | Howard et al., 2017 | .342 | .198 | .289 | .585 | .534 | X | .446 | .621
30 | nasnet_large | Zoph et al., 2017 | .341 | .197 | .229 | .604 | .529 | X | .489 | .827
31 | pnasnet_large | Liu et al., 2017 | .340 | .202 | .234 | .591 | .513 | X | .499 | .829
31 | mobilenet_v2_0.5_160 | Howard et al., 2017 | .340 | .212 | .267 | .570 | .536 | X | .456 | .610
33 | mobilenet_v1_0.75_160 | Howard et al., 2017 | .339 | .222 | .280 | .557 | .528 | X | .447 | .653
34 | mobilenet_v1_0.5_160 | Howard et al., 2017 | .337 | .216 | .277 | .574 | .521 | X | .434 | .591
35 | mobilenet_v2_0.5_128 | Howard et al., 2017 | .334 | .196 | .272 | .577 | .530 | X | .429 | .577
36 | mobilenet_v2_0.35_160 | Howard et al., 2017 | .333 | .208 | .276 | .544 | .527 | X | .440 | .557
37 | mobilenet_v2_0.35_224 | Howard et al., 2017 | .329 | .197 | .239 | .570 | .500 | X | .466 | .603
38 | bagnet33 | Brendel et al., 2019 | .328 | .192 | .233 | .609 | .471 | X | .462 | .590
38 | inception_v4 | Szegedy et al., 2016 | .328 | .158 | .204 | .576 | .497 | X | .534 | .802
38 | mobilenet_v2_0.35_192 | Howard et al., 2017 | .328 | .213 | .251 | .562 | .523 | X | .417 | .582
41 | alexnet | Krizhevsky et al., 2012 | .324 | .213 | .255 | .582 | .526 | X | .370 | .577
42 | nasnet_mobile | Zoph et al., 2017 | .323 | .194 | .251 | .591 | .522 | X | .382 | .740
43 | mobilenet_v2_0.75_224 | Howard et al., 2017 | .317 | .210 | .275 | .592 | .276 | X | .547 | .698
44 | mobilenet_v2_1.0_224 | Howard et al., 2017 | .316 | .196 | .278 | .582 | .328 | X | .509 | .718
44 | mobilenet_v2_1.0_192 | Howard et al., 2017 | .316 | .199 | .276 | .580 | .315 | X | .526 | .707
46 | mobilenet_v2_0.5_96 | Howard et al., 2017 | .315 | .209 | .279 | .526 | .506 | X | .373 | .512
47 | mobilenet_v2_1.0_160 | Howard et al., 2017 | .314 | .199 | .281 | .550 | .360 | X | .495 | .688
47 | mobilenet_v1_0.25_224 | Howard et al., 2017 | .314 | .217 | .235 | .562 | .517 | X | .352 | .498
49 | mobilenet_v2_0.75_192 | Howard et al., 2017 | .313 | .203 | .274 | .572 | .301 | X | .526 | .687
50 | mobilenet_v2_1.4_224 | Howard et al., 2017 | .312 | .198 | .273 | .555 | .342 | X | .506 | .750
51 | mobilenet_v2_0.35_128 | Howard et al., 2017 | .311 | .197 | .266 | .549 | .491 | X | .364 | .508
52 | squeezenet1_1 | Iandola et al., 2016 | .309 | .221 | .279 | .599 | .460 | X | .291 | .575
52 | mobilenet_v2_1.3_224 | Howard et al., 2017 | .309 | .218 | .269 | .528 | .327 | X | .511 | .744
54 | bagnet17 | Brendel et al., 2019 | .307 | .209 | .240 | .587 | .450 | X | .359 | .460
55 | inception_v2 | Szegedy et al., 2015 | .304 | .193 | .268 | .490 | .359 | X | .513 | .739
55 | mobilenet_v2_0.35_96 | Howard et al., 2017 | .304 | .190 | .276 | .543 | .473 | X | .344 | .455
57 | mobilenet_v2_0.75_160 | Howard et al., 2017 | .303 | .195 | .254 | .573 | .317 | X | .480 | .664
57 | mobilenet_v2_1.0_128 | Howard et al., 2017 | .303 | .216 | .281 | .577 | .270 | X | .474 | .653
57 | inception_v1 | Szegedy et al., 2014 | .303 | .189 | .283 | .505 | .316 | X | .524 | .698
60 | mobilenet_v2_0.75_128 | Howard et al., 2017 | .301 | .215 | .283 | .583 | .251 | X | .470 | .632
61 | CORnet-Z | Kubilius et al., 2018 | .291 | .142 | .219 | .589 | .444 | X | .356 | .470
61 | squeezenet1_0 | Iandola et al., 2016 | .291 | .192 | .228 | .609 | .454 | X | .263 | .575
63 | mobilenet_v2_1.0_96 | Howard et al., 2017 | .290 | .200 | .298 | .548 | .257 | X | .437 | .603
64 | mobilenet_v2_0.75_96 | Howard et al., 2017 | .285 | .198 | .286 | .548 | .218 | X | .459 | .588
65 | mobilenet_v1_0.5_128 | Howard et al., 2017 | .284 | .214 | .292 | .552 | .270 | X | .380 | .563
66 | inception_resnet_v2 | Szegedy et al., 2016 | .281 | .178 | .215 | .563 | .227 | X | .502 | .804
67 | bagnet9 | Brendel et al., 2019 | .277 | .166 | .218 | .568 | .404 | X | .307 | .260
68 | mobilenet_v1_0.25_192 | Howard et al., 2017 | .271 | .203 | .266 | .554 | .244 | X | .362 | .477
69 | mobilenet_v1_0.25_160 | Howard et al., 2017 | .261 | .202 | .258 | .490 | .274 | X | .341 | .455
70 | mobilenet_v1_0.25_128 | Howard et al., 2017 | .254 | .214 | .275 | .521 | .193 | X | .322 | .415
71 | vggface | Parkhi et al., 2015 | .232 | .178 | .225 | .566 | .346 | X | .078 |
Model scores on brain benchmarks. The greener and brighter a cell in the online leaderboard, the better the model's score. Scores are ceiled (normalized by each benchmark's noise ceiling); hover over a benchmark name to see its ceiling. An 'X' marks a benchmark on which a model has no score.
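For concreteness, the snippet below recomputes the "average" column from the per-benchmark ceiled scores listed above. It is a minimal sketch, not the official scoring code: it assumes the average is a plain mean over the six brain benchmarks (ImageNet excluded) and that a missing score ('X') contributes 0. That reading is consistent with the published numbers but is an assumption, not a documented definition.

# Recompute the "average" column from per-benchmark (ceiled) scores.
# Assumption: plain mean over the six brain benchmarks, ImageNet excluded,
# with a missing score ("X") counted as 0 -- matches the table above,
# but is not an official definition.
BRAIN_BENCHMARKS = ("V1", "V2", "V4", "IT", "IT-temporal", "behavior")

def average_score(scores):
    return sum(scores.get(b, 0.0) for b in BRAIN_BENCHMARKS) / len(BRAIN_BENCHMARKS)

cornet_s = {"V1": .184, "V2": .191, "V4": .611, "IT": .533,
            "IT-temporal": .316, "behavior": .545}
resnet101_v1 = {"V1": .207, "V2": .274, "V4": .600, "IT": .545,
                "behavior": .561}  # no Kar2019-ost score ("X")

print(f"{average_score(cornet_s):.3f}")      # 0.397, as listed in the table
print(f"{average_score(resnet101_v1):.3f}")  # ~0.365 (may differ in the last digit
                                             # because the table rounds its inputs)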

About

The Brain-Score platform aims to yield strong computational models of the ventral stream. We enable researchers to quickly get a sense of how their model scores against standardized brain benchmarks on multiple dimensions and facilitate comparisons to other state-of-the-art models. At the same time, new brain data can quickly be tested against a wide range of models to determine how well existing models explain the data.

Brain-Score is organized by the DiCarlo lab @ MIT in collaboration with other labs worldwide. We are working towards an easy-to-use platform where a model can be submitted to obtain its scores on a range of brain benchmarks, and where new benchmarks can be incorporated to challenge the models.

This quantified approach lets us keep track of how close our models are to the brain on a range of experiments (data) using different evaluation techniques (metrics). For more details, please refer to the paper.
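As a concrete example of one such metric, the "-pls" neural benchmarks score a model by how well a cross-validated PLS regression from its activations predicts recorded neural responses. The sketch below is a simplified illustration rather than the Brain-Score implementation: it uses a single train/test split instead of repeated cross-validation, assumes 25 PLS components, summarizes per-neuron Pearson correlations by their median, and omits the ceiling normalization; the function name neural_predictivity is ours.

# Simplified sketch of a "-pls"-style neural-predictivity metric (assumptions:
# 25 PLS components, one train/test split, median per-neuron Pearson r).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

def neural_predictivity(model_features, neural_responses, n_components=25):
    """model_features: (n_stimuli, n_features); neural_responses: (n_stimuli, n_neurons)."""
    X_train, X_test, y_train, y_test = train_test_split(
        model_features, neural_responses, test_size=0.25, random_state=0)
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_train, y_train)
    y_pred = pls.predict(X_test)
    # Pearson correlation between predicted and measured responses, per neuron
    per_neuron_r = np.array([
        np.corrcoef(y_pred[:, i], y_test[:, i])[0, 1]
        for i in range(y_test.shape[1])])
    return np.median(per_neuron_r)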

Compare

An interactive scatter plot of Deng2009-top1 vs. average scores, one dot per model.
Hover over a dot to reveal model details, scroll to zoom in and out, and use the dropdowns to change the x-/y-axis data.
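A comparison like this can also be reproduced offline by plotting any two columns of the table against each other. Below is a minimal matplotlib sketch using a handful of (model, Deng2009-top1, average) triples taken from the table above.

# Minimal sketch of the ImageNet-vs-Brain-Score comparison plot,
# using a few rows from the leaderboard table above.
import matplotlib.pyplot as plt

models = {
    "CORnet-S": (.747, .397),
    "resnet-101_v1": (.764, .365),
    "nasnet_large": (.827, .341),
    "alexnet": (.577, .324),
    "bagnet9": (.260, .277),
}

imagenet = [xy[0] for xy in models.values()]
average = [xy[1] for xy in models.values()]

fig, ax = plt.subplots()
ax.scatter(imagenet, average)
for name, (x, y) in models.items():
    ax.annotate(name, (x, y), fontsize=8)  # label each model's dot
ax.set_xlabel("Deng2009-top1 (ImageNet top-1)")
ax.set_ylabel("average Brain-Score")
plt.show()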

Participate

Challenge the data: Submit a model

Please get in touch with us to have us score your model (we are working on automating this step).

Challenge the models: Submit data

If you have neural or behavioral recordings that you would like models to compete on, please get in touch with us.

Change the evaluation: Submit a metric

If you have an idea for a different way of comparing brain and machine, please get in touch with us.

Citation

If you use Brain-Score in your work, please cite "Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?":
@article{SchrimpfKubilius2018BrainScore,
  title={Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?},
  author={Martin Schrimpf and Jonas Kubilius and Ha Hong and Najib J. Majaj and Rishi Rajalingham and Elias B. Issa and Kohitij Kar and Pouya Bashivan and Jonathan Prescott-Roy and Kailyn Schmidt and Daniel L. K. Yamins and James J. DiCarlo},
  journal={bioRxiv preprint},
  year={2018}
}