Benchmarks: V1 = FreemanZiemba2013.V1-pls; V2 = FreemanZiemba2013.V2-pls; V4 = Majaj2015.V4-pls; IT = Majaj2015.IT-pls; IT-temporal = Kar2019-ost; behavior = Rajalingham2018-i2n; ImageNet = Deng2009-top1.

| Rank | Model | Reference | average | V1 | V2 | V4 | IT | IT-temporal | behavior | ImageNet |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | CORnet-S | Kubilius et al., 2018 | .397 | .184 | .191 | .611 | .533 | .316 | .545 | .747 |
| 2 | resnet-101_v1 | He et al., 2015 | .365 | .207 | .274 | .600 | .545 | nan | .561 | .764 |
| 2 | densenet-169 | Huang et al., 2016 | .365 | .198 | .288 | .618 | .542 | nan | .543 | .759 |
| 4 | resnet-50_v1 | He et al., 2015 | .364 | .208 | .279 | .611 | .558 | nan | .526 | .752 |
| 5 | resnet-50_v2 | He et al., 2015 | .363 | .229 | .283 | .609 | .504 | nan | .553 | .756 |
| 5 | resnet-152_v1 | He et al., 2015 | .363 | .211 | .278 | .607 | .548 | nan | .533 | .768 |
| 5 | densenet-201 | Huang et al., 2016 | .363 | .206 | .284 | .604 | .544 | nan | .537 | .772 |
| 8 | resnet-101_v2 | He et al., 2015 | .362 | .217 | .278 | .615 | .508 | nan | .555 | .774 |
| 9 | mobilenet_v1_1.0_224 | Howard et al., 2017 | .359 | .205 | .285 | .594 | .567 | nan | .505 | .709 |
| 9 | densenet-121 | Huang et al., 2016 | .359 | .199 | .271 | .604 | .544 | nan | .535 | .745 |
| 11 | resnet-152_v2 | He et al., 2015 | .358 | .209 | .277 | .606 | .516 | nan | .540 | .778 |
| 12 | mobilenet_v1_0.75_192 | Howard et al., 2017 | .357 | .218 | .297 | .574 | .550 | nan | .502 | .672 |
| 13 | mobilenet_v1_0.75_224 | Howard et al., 2017 | .355 | .209 | .273 | .593 | .553 | nan | .502 | .684 |
| 13 | mobilenet_v1_1.0_192 | Howard et al., 2017 | .355 | .216 | .286 | .588 | .561 | nan | .476 | .700 |
| 15 | mobilenet_v1_1.0_160 | Howard et al., 2017 | .351 | .211 | .289 | .591 | .538 | nan | .475 | .680 |
| 15 | inception_v3 | Szegedy et al., 2015 | .351 | .196 | .280 | .614 | .533 | nan | .481 | .780 |
| 17 | resnet-18 | He et al., 2015 | .350 | .216 | .266 | .574 | .519 | nan | .524 | .698 |
| 18 | mobilenet_v1_0.5_224 | Howard et al., 2017 | .346 | .208 | .275 | .578 | .540 | nan | .476 | .633 |
| 19 | mobilenet_v2_0.5_224 | Howard et al., 2017 | .345 | .216 | .256 | .578 | .531 | nan | .488 | .654 |
| 19 | mobilenet_v1_1.0_128 | Howard et al., 2017 | .345 | .200 | .301 | .569 | .529 | nan | .471 | .652 |
| 21 | resnet-34 | He et al., 2015 | .344 | .195 | .266 | .577 | .479 | nan | .546 | .733 |
| 21 | mobilenet_v1_0.5_192 | Howard et al., 2017 | .344 | .211 | .287 | .559 | .541 | nan | .465 | .617 |
| 21 | xception | Chollet, 2016 | .344 | .185 | .258 | .619 | .494 | nan | .505 | .790 |
| 24 | vgg-16 | Simonyan et al., 2014 | .342 | .177 | .274 | .627 | .513 | nan | .461 | .715 |
| 24 | vgg-19 | Simonyan et al., 2014 | .342 | .183 | .266 | .619 | .491 | nan | .494 | .711 |
| 24 | mobilenet_v2_0.5_192 | Howard et al., 2017 | .342 | .203 | .261 | .575 | .528 | nan | .483 | .639 |
| 24 | mobilenet_v1_0.75_128 | Howard et al., 2017 | .342 | .198 | .289 | .585 | .534 | nan | .446 | .621 |
| 28 | nasnet_large | Zoph et al., 2017 | .341 | .197 | .229 | .604 | .529 | nan | .489 | .827 |
| 29 | pnasnet_large | Liu et al., 2017 | .340 | .202 | .234 | .591 | .513 | nan | .499 | .829 |
| 29 | mobilenet_v2_0.5_160 | Howard et al., 2017 | .340 | .212 | .267 | .570 | .536 | nan | .456 | .610 |
| 31 | mobilenet_v1_0.75_160 | Howard et al., 2017 | .339 | .222 | .280 | .557 | .528 | nan | .447 | .653 |
| 32 | mobilenet_v1_0.5_160 | Howard et al., 2017 | .337 | .216 | .277 | .574 | .521 | nan | .434 | .591 |
| 33 | mobilenet_v2_0.5_128 | Howard et al., 2017 | .334 | .196 | .272 | .577 | .530 | nan | .429 | .577 |
| 34 | mobilenet_v2_0.35_160 | Howard et al., 2017 | .333 | .208 | .276 | .544 | .527 | nan | .440 | .557 |
| 35 | mobilenet_v2_0.35_224 | Howard et al., 2017 | .329 | .197 | .239 | .570 | .500 | nan | .466 | .603 |
| 36 | bagnet33 | Brendel et al., 2019 | .328 | .192 | .233 | .609 | .471 | nan | .462 | .590 |
| 36 | inception_v4 | Szegedy et al., 2016 | .328 | .158 | .204 | .576 | .497 | nan | .534 | .802 |
| 36 | mobilenet_v2_0.35_192 | Howard et al., 2017 | .328 | .213 | .251 | .562 | .523 | nan | .417 | .582 |
| 39 | alexnet | Krizhevsky et al., 2012 | .324 | .213 | .255 | .582 | .526 | nan | .370 | .577 |
| 40 | nasnet_mobile | Zoph et al., 2017 | .323 | .194 | .251 | .591 | .522 | nan | .382 | .740 |
| 41 | mobilenet_v2_0.75_224 | Howard et al., 2017 | .317 | .210 | .275 | .592 | .276 | nan | .547 | .698 |
| 42 | mobilenet_v2_1.0_224 | Howard et al., 2017 | .316 | .196 | .278 | .582 | .328 | nan | .509 | .718 |
| 42 | mobilenet_v2_1.0_192 | Howard et al., 2017 | .316 | .199 | .276 | .580 | .315 | nan | .526 | .707 |
| 44 | mobilenet_v2_0.5_96 | Howard et al., 2017 | .315 | .209 | .279 | .526 | .506 | nan | .373 | .512 |
| 45 | mobilenet_v2_1.0_160 | Howard et al., 2017 | .314 | .199 | .281 | .550 | .360 | nan | .495 | .688 |
| 45 | mobilenet_v1_0.25_224 | Howard et al., 2017 | .314 | .217 | .235 | .562 | .517 | nan | .352 | .498 |
| 47 | mobilenet_v2_0.75_192 | Howard et al., 2017 | .313 | .203 | .274 | .572 | .301 | nan | .526 | .687 |
| 48 | mobilenet_v2_1.4_224 | Howard et al., 2017 | .312 | .198 | .273 | .555 | .342 | nan | .506 | .750 |
| 49 | mobilenet_v2_0.35_128 | Howard et al., 2017 | .311 | .197 | .266 | .549 | .491 | nan | .364 | .508 |
| 50 | squeezenet1_1 | Iandola et al., 2016 | .309 | .221 | .279 | .599 | .460 | nan | .291 | .575 |
| 50 | mobilenet_v2_1.3_224 | Howard et al., 2017 | .309 | .218 | .269 | .528 | .327 | nan | .511 | .744 |
| 52 | bagnet17 | Brendel et al., 2019 | .307 | .209 | .240 | .587 | .450 | nan | .359 | .460 |
| 53 | inception_v2 | Szegedy et al., 2015 | .304 | .193 | .268 | .490 | .359 | nan | .513 | .739 |
| 53 | mobilenet_v2_0.35_96 | Howard et al., 2017 | .304 | .190 | .276 | .543 | .473 | nan | .344 | .455 |
| 55 | mobilenet_v2_0.75_160 | Howard et al., 2017 | .303 | .195 | .254 | .573 | .317 | nan | .480 | .664 |
| 55 | mobilenet_v2_1.0_128 | Howard et al., 2017 | .303 | .216 | .281 | .577 | .270 | nan | .474 | .653 |
| 55 | inception_v1 | Szegedy et al., 2014 | .303 | .189 | .283 | .505 | .316 | nan | .524 | .698 |
| 58 | mobilenet_v2_0.75_128 | Howard et al., 2017 | .301 | .215 | .283 | .583 | .251 | nan | .470 | .632 |
| 59 | CORnet-Z | Kubilius et al., 2018 | .291 | .142 | .219 | .589 | .444 | nan | .356 | .470 |
| 59 | squeezenet1_0 | Iandola et al., 2016 | .291 | .192 | .228 | .609 | .454 | nan | .263 | .575 |
| 61 | mobilenet_v2_1.0_96 | Howard et al., 2017 | .290 | .200 | .298 | .548 | .257 | nan | .437 | .603 |
| 62 | mobilenet_v2_0.75_96 | Howard et al., 2017 | .285 | .198 | .286 | .548 | .218 | nan | .459 | .588 |
| 63 | mobilenet_v1_0.5_128 | Howard et al., 2017 | .284 | .214 | .292 | .552 | .270 | nan | .380 | .563 |
| 64 | inception_resnet_v2 | Szegedy et al., 2016 | .281 | .178 | .215 | .563 | .227 | nan | .502 | .804 |
| 65 | bagnet9 | Brendel et al., 2019 | .277 | .166 | .218 | .568 | .404 | nan | .307 | .260 |
| 66 | mobilenet_v1_0.25_192 | Howard et al., 2017 | .271 | .203 | .266 | .554 | .244 | nan | .362 | .477 |
| 67 | mobilenet_v1_0.25_160 | Howard et al., 2017 | .261 | .202 | .258 | .490 | .274 | nan | .341 | .455 |
| 68 | mobilenet_v1_0.25_128 | Howard et al., 2017 | .254 | .214 | .275 | .521 | .193 | nan | .322 | .415 |
| 69 | vggface | Parkhi et al., 2015 | .232 | .178 | .225 | .566 | .346 | nan | .078 | |
Model scores on brain benchmarks. Higher is better. Scores are ceiled, i.e. normalized by each benchmark's ceiling; an entry of nan means no score is available for that model on that benchmark.
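Most of the neural benchmarks above carry a "-pls" suffix: a model is scored by how well a cross-validated PLS regression from its activations predicts the recorded responses, and the raw score is then divided by the benchmark's ceiling. The sketch below illustrates that general recipe; the number of components, the cross-validation scheme, and the per-neuroid aggregation are illustrative assumptions, not the exact Brain-Score implementation.

# Minimal sketch of a ceiled neural-predictivity score in the spirit of the
# "-pls" benchmarks above. Details (components, splits, aggregation) are
# illustrative assumptions, not the exact Brain-Score implementation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

def neural_predictivity(model_activations, neural_responses, n_components=25, n_splits=10):
    """Cross-validated PLS regression from model features to neural responses.

    model_activations: (n_stimuli, n_features) array of model responses.
    neural_responses:  (n_stimuli, n_neuroids) array of recorded responses.
    Returns the median per-neuroid Pearson correlation, averaged over test splits.
    n_components must not exceed the number of training stimuli or features.
    """
    per_split = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(model_activations):
        pls = PLSRegression(n_components=n_components)
        pls.fit(model_activations[train], neural_responses[train])
        predictions = pls.predict(model_activations[test])
        # correlate predicted and measured responses for each neuroid
        r = [np.corrcoef(predictions[:, i], neural_responses[test, i])[0, 1]
             for i in range(neural_responses.shape[1])]
        per_split.append(np.median(r))
    return float(np.mean(per_split))

def ceiled_score(raw_score, ceiling):
    """Normalize a raw score by the benchmark's ceiling, as reported in the table."""
    return raw_score / ceiling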

About

The Brain-Score platform aims to yield strong computational models of the ventral stream. We enable researchers to quickly get a sense of how their model scores against standardized brain benchmarks on multiple dimensions and facilitate comparisons to other state-of-the-art models. At the same time, new brain data can quickly be tested against a wide range of models to determine how well existing models explain the data.

Brain-Score is organized by the Brain-Score team in collaboration with researchers and labs worldwide. We are working towards an easy-to-use platform where a model can be submitted to obtain its scores on a range of brain benchmarks, and where new benchmarks can be incorporated to challenge the models.

This quantified approach lets us keep track of how close our models are to the brain on a range of experiments (data) using different evaluation techniques (metrics). For more details, please refer to the paper.
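In code, this separation means a benchmark is essentially a dataset (stimuli plus measurements) paired with a metric and a ceiling. The class below is a hypothetical sketch of that decomposition; the names and fields are illustrative and do not mirror the brainscore package's actual interface.

# Hypothetical sketch of the data + metric decomposition described above;
# class and field names are illustrative, not the brainscore package API.
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Benchmark:
    name: str                                   # e.g. "Majaj2015.IT-pls"
    stimuli: np.ndarray                         # images shown in the experiment
    measurements: np.ndarray                    # neural or behavioral recordings
    metric: Callable[[np.ndarray, np.ndarray], float]  # evaluation technique
    ceiling: float                              # reliability of the data itself

    def score(self, model_predictions: np.ndarray) -> float:
        raw = self.metric(model_predictions, self.measurements)
        return raw / self.ceiling               # report ceiled scores, as in the table

A model is then evaluated by running the benchmark's stimuli through it and handing the resulting predictions to score.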

Compare

The comparison view plots any score on the x-axis against any other on the y-axis, with one dot per model, making it easy to examine relationships between benchmarks (for example, ImageNet accuracy versus behavioral similarity).
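Outside the website, the same kind of comparison can be reproduced from the table above with a few lines of plotting code; the subset of models below is taken directly from the table.

# Illustrative recreation of the comparison view with matplotlib,
# plotting two columns of the leaderboard table against each other.
import matplotlib.pyplot as plt

# (model, ImageNet top-1, behavior) triples taken from the table above (subset)
models = [
    ("CORnet-S", .747, .545),
    ("resnet-101_v1", .764, .561),
    ("alexnet", .577, .370),
    ("mobilenet_v1_0.25_128", .415, .322),
]

x = [imagenet for _, imagenet, _ in models]
y = [behavior for _, _, behavior in models]
plt.scatter(x, y)
for name, imagenet, behavior in models:
    plt.annotate(name, (imagenet, behavior), fontsize=8)
plt.xlabel("ImageNet: Deng2009-top1")
plt.ylabel("behavior: Rajalingham2018-i2n")
plt.show()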

Participate

Challenge the data: Submit a model

If you would like your model to be scored on these benchmarks, please submit it through the platform.

Challenge the models: Submit data

If you have neural or behavioral recordings that you would like models to compete on, please get in touch with us to submit data.

Change the evaluation: Submit a metric

If you have an idea for a different way of comparing brain and machine, please send in a pull request.
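As a toy illustration of what a metric is, the function below compares model and brain responses via representational similarity analysis (RSA), i.e. by the similarity of their representational dissimilarity matrices. The function names are hypothetical and not part of the brainscore package.

# Hypothetical example of a new metric: representational similarity analysis.
# Names are illustrative, not the brainscore package API.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses: np.ndarray) -> np.ndarray:
    """Pairwise correlation distance between stimuli; responses is (n_stimuli, n_units)."""
    return pdist(responses, metric="correlation")

def rsa_metric(model_responses: np.ndarray, brain_responses: np.ndarray) -> float:
    """Spearman correlation between the two representational dissimilarity matrices."""
    rho, _ = spearmanr(rdm(model_responses), rdm(brain_responses))
    return float(rho)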

Citation

If you use Brain-Score in your work, please cite Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?
as well as the respective benchmark sources.
@article{SchrimpfKubilius2018BrainScore,
title={Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?},
author={Martin Schrimpf and Jonas Kubilius and Ha Hong and Najib J. Majaj and Rishi Rajalingham and Elias B. Issa and Kohitij Kar and Pouya Bashivan and Jonathan Prescott-Roy and Franziska Geiger and Kailyn Schmidt and Daniel L. K. Yamins and James J. DiCarlo},
journal={bioRxiv preprint},
year={2018}
}