| Brain Rank | Model | Brain-Score | V1: your data here! | V2: your data here! | V4: dicarlo.Majaj2015 | IT: dicarlo.Majaj2015 | Behavior: dicarlo.Rajalingham2018 | Classification: ImageNet 2012 (top-1 %) |
|---|---|---|---|---|---|---|---|---|
| – | Ceiling | .729 | | | .892 | .817 | .479 | 100.0 |
| 1 | densenet-169 (Huang et al., 2016) | .549 | | | .663 | .606 | .378 | 75.9 |
| 2 | cornet_s (Kubilius et al., 2018) | .544 | | | .650 | .600 | .382 | 74.7 |
| 3 | resnet-101_v2 (He et al., 2015) | .542 | | | .653 | .585 | .389 | 77.4 |
| 4 | densenet-201 (Huang et al., 2016) | .541 | | | .655 | .601 | .368 | 77.2 |
| 5 | densenet-121 (Huang et al., 2016) | .541 | | | .657 | .597 | .369 | 74.5 |
| 6 | resnet-152_v2 (He et al., 2015) | .541 | | | .658 | .589 | .377 | 77.8 |
| 7 | resnet-50_v2 (He et al., 2015) | .540 | | | .653 | .589 | .377 | 75.6 |
| 8 | xception (Chollet et al., 2016) | .533 | | | .671 | .565 | .361 | 79.0 |
| 9 | inception_v2 (Szegedy et al., 2015) | .532 | | | .646 | .593 | .357 | 73.9 |
| 10 | inception_v1 (Szegedy et al., 2014) | .532 | | | .649 | .583 | .362 | 69.8 |
| 11 | resnet-18 (He et al., 2015) | .531 | | | .645 | .583 | .364 | 69.8 |
| 12 | nasnet_mobile (Zoph et al., 2017) | .530 | | | .650 | .598 | .342 | 74.0 |
| 13 | pnasnet_large (Liu et al., 2017) | .528 | | | .644 | .590 | .351 | 82.9 |
| 14 | inception_resnet_v2 (Szegedy et al., 2016) | .528 | | | .639 | .593 | .352 | 80.4 |
| 15 | nasnet_large (Zoph et al., 2017) | .527 | | | .650 | .591 | .339 | 82.7 |
| 16 | mobilenet_v2_0.75_224 (Howard et al., 2017) | .527 | | | .613 | .590 | .377 | 69.8 |
| 17 | vgg-19 (Simonyan et al., 2014) | .525 | | | .672 | .566 | .338 | 71.1 |
| 18 | mobilenet_v2_1.4_224 (Howard et al., 2017) | .525 | | | .626 | .600 | .348 | 75.0 |
| 19 | inception_v4 (Szegedy et al., 2016) | .524 | | | .628 | .575 | .371 | 80.2 |
| 20 | mobilenet_v1_1.0_224 (Howard et al., 2017) | .524 | | | .623 | .601 | .347 | 70.9 |
| 21 | mobilenet_v2_1.3_224 (Howard et al., 2017) | .523 | | | .619 | .595 | .356 | 74.4 |
| 22 | inception_v3 (Szegedy et al., 2015) | .523 | | | .646 | .587 | .335 | 78.0 |
| 23 | mobilenet_v2_0.75_192 (Howard et al., 2017) | .522 | | | .613 | .594 | .359 | 68.7 |
| 24 | resnet-34 (He et al., 2015) | .522 | | | .629 | .559 | .378 | 73.3 |
| 25 | mobilenet_v2_1.0_192 (Howard et al., 2017) | .522 | | | .601 | .595 | .369 | 70.7 |
| 26 | vgg-16 (Simonyan et al., 2014) | .521 | | | .669 | .572 | .321 | 71.5 |
| 27 | mobilenet_v1_0.75_224 (Howard et al., 2017) | .519 | | | .618 | .592 | .346 | 68.4 |
| 28 | mobilenet_v1_0.75_192 (Howard et al., 2017) | .517 | | | .620 | .592 | .340 | 67.2 |
| 29 | mobilenet_v2_1.0_224 (Howard et al., 2017) | .517 | | | .612 | .591 | .348 | 71.8 |
| 30 | mobilenet_v1_1.0_160 (Howard et al., 2017) | .517 | | | .632 | .592 | .327 | 68.0 |
| 31 | mobilenet_v1_1.0_192 (Howard et al., 2017) | .516 | | | .629 | .594 | .325 | 70.0 |
| 32 | mobilenet_v2_0.5_224 (Howard et al., 2017) | .514 | | | .622 | .588 | .332 | 65.4 |
| 33 | mobilenet_v2_1.0_160 (Howard et al., 2017) | .513 | | | .602 | .599 | .337 | 68.8 |
| 34 | mobilenet_v2_0.5_192 (Howard et al., 2017) | .510 | | | .616 | .586 | .329 | 63.9 |
| 35 | mobilenet_v2_0.75_160 (Howard et al., 2017) | .509 | | | .605 | .594 | .328 | 66.4 |
| 36 | mobilenet_v2_0.35_224 (Howard et al., 2017) | .507 | | | .627 | .580 | .314 | 60.3 |
| 37 | mobilenet_v2_0.5_160 (Howard et al., 2017) | .506 | | | .625 | .582 | .310 | 61.0 |
| 38 | mobilenet_v1_0.5_224 (Howard et al., 2017) | .505 | | | .604 | .585 | .326 | 63.3 |
| 39 | mobilenet_v1_0.5_192 (Howard et al., 2017) | .503 | | | .614 | .578 | .318 | 61.7 |
| 40 | mobilenet_v1_1.0_128 (Howard et al., 2017) | .502 | | | .623 | .575 | .308 | 65.2 |
| 41 | mobilenet_v1_0.75_160 (Howard et al., 2017) | .502 | | | .623 | .581 | .301 | 65.3 |
| 42 | mobilenet_v2_1.0_128 (Howard et al., 2017) | .501 | | | .601 | .591 | .310 | 65.3 |
| 43 | mobilenet_v2_0.35_160 (Howard et al., 2017) | .500 | | | .619 | .577 | .303 | 55.7 |
| 44 | mobilenet_v2_0.35_192 (Howard et al., 2017) | .499 | | | .629 | .579 | .290 | 58.2 |
| 45 | mobilenet_v1_0.75_128 (Howard et al., 2017) | .498 | | | .636 | .573 | .284 | 62.1 |
| 46 | mobilenet_v2_1.0_96 (Howard et al., 2017) | .496 | | | .613 | .578 | .297 | 60.3 |
| 47 | mobilenet_v2_0.75_128 (Howard et al., 2017) | .496 | | | .608 | .578 | .302 | 63.2 |
| 48 | mobilenet_v1_0.5_160 (Howard et al., 2017) | .495 | | | .621 | .576 | .289 | 59.1 |
| 49 | mobilenet_v2_0.75_96 (Howard et al., 2017) | .494 | | | .620 | .575 | .286 | 58.8 |
| 50 | mobilenet_v2_0.5_128 (Howard et al., 2017) | .489 | | | .607 | .574 | .287 | 57.7 |
| 51 | alexnet (Krizhevsky et al., 2012) | .488 | | | .631 | .589 | .245 | 57.7 |
| 52 | mobilenet_v1_0.5_128 (Howard et al., 2017) | .479 | | | .623 | .558 | .256 | 56.3 |
| 53 | mobilenet_v1_0.25_224 (Howard et al., 2017) | .478 | | | .619 | .574 | .240 | 49.8 |
| 54 | mobilenet_v2_0.5_96 (Howard et al., 2017) | .476 | | | .611 | .555 | .262 | 51.2 |
| 55 | mobilenet_v2_0.35_128 (Howard et al., 2017) | .472 | | | .613 | .550 | .253 | 50.8 |
| 56 | squeezenet1_1 (Iandola et al., 2016) | .469 | | | .652 | .553 | .201 | 57.5 |
| 57 | mobilenet_v2_0.35_96 (Howard et al., 2017) | .465 | | | .610 | .540 | .244 | 45.5 |
| 58 | mobilenet_v1_0.25_192 (Howard et al., 2017) | .464 | | | .600 | .551 | .242 | 47.7 |
| 59 | mobilenet_v1_0.25_160 (Howard et al., 2017) | .459 | | | .605 | .544 | .228 | 45.5 |
| 60 | mobilenet_v1_0.25_128 (Howard et al., 2017) | .455 | | | .619 | .527 | .220 | 41.5 |
| 61 | squeezenet1_0 (Iandola et al., 2016) | .454 | | | .641 | .542 | .180 | 57.5 |
Model scores on brain benchmarks; higher is better. Scores are unceiled.
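For readers who want to reproduce the composite column: the Brain-Score values in the table are consistent with an unweighted mean of the V4, IT, and behavioral scores, and the Ceiling row allows a ceiled score to be derived by dividing each raw score by its benchmark ceiling. The sketch below illustrates exactly that; both the unweighted mean and the simple division are assumptions read off the table, and the paper remains the definitive reference for how scores are actually combined.

```python
# Minimal sketch: reproduce the composite Brain-Score for one table row and
# illustrate ceiling normalization. Assumptions: the composite is an unweighted
# mean of the V4, IT, and behavioral scores (consistent with the table rows),
# and a "ceiled" score is the raw (unceiled) score divided by the benchmark ceiling.

ceilings = {"V4": 0.892, "IT": 0.817, "Behavior": 0.479}  # Ceiling row of the table

def brain_score(scores):
    """Unweighted mean over the available brain benchmarks."""
    return sum(scores.values()) / len(scores)

def ceiled(scores):
    """Normalize each raw score by its benchmark ceiling."""
    return {benchmark: value / ceilings[benchmark] for benchmark, value in scores.items()}

densenet_169 = {"V4": 0.663, "IT": 0.606, "Behavior": 0.378}  # rank-1 row of the table
print(round(brain_score(densenet_169), 3))                    # 0.549, matching the table
print({benchmark: round(value, 3) for benchmark, value in ceiled(densenet_169).items()})
```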

About

The Brain-Score platform aims to yield strong computational models of the ventral stream. We enable researchers to quickly get a sense of how their model scores against standardized brain benchmarks on multiple dimensions and facilitate comparisons to other state-of-the-art models. At the same time, new brain data can quickly be tested against a wide range of models to determine how well existing models explain the data.

Brain-Score is organized by the DiCarlo lab at MIT in collaboration with other labs worldwide. We are working towards an easy-to-use platform where a model can be submitted to obtain its scores on a range of brain benchmarks, and where new benchmarks can be incorporated to challenge the models.

This quantitative approach lets us keep track of how close our models are to the brain across a range of experiments (data) using different evaluation techniques (metrics). For more details, please refer to the paper.
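To make the idea of a metric concrete, here is a minimal sketch of one family of metrics used for neural benchmarks: regress model activations onto recorded neural responses with cross-validation, then score held-out predictions by their correlation with the data. The function name, the choice of PLS regression, the number of components, and the aggregation over folds and neurons are illustrative assumptions, not the exact Brain-Score implementation; see the paper for the definitive definitions.

```python
# Sketch of a neural-predictivity metric (assumed details, not the Brain-Score code):
# cross-validated regression from model activations to neural responses,
# scored by the correlation between predicted and measured held-out responses.
import numpy as np
from scipy.stats import pearsonr
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold

def neural_predictivity(activations, responses, n_components=25, n_splits=10):
    """activations: numpy array (n_images, n_features) of model features;
    responses: numpy array (n_images, n_neurons) of recorded responses."""
    fold_scores = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(activations):
        regression = PLSRegression(n_components=n_components, scale=False)
        regression.fit(activations[train], responses[train])
        predictions = regression.predict(activations[test])
        # correlate predicted and measured responses per neuron on held-out images
        fold_scores.append([pearsonr(predictions[:, neuron], responses[test][:, neuron])[0]
                            for neuron in range(responses.shape[1])])
    # average over folds, then aggregate over neurons with the median
    return float(np.median(np.mean(fold_scores, axis=0)))
```

In use, a model's layer activations to the benchmark images would be passed as `activations` and the recorded responses (e.g., V4 or IT spike rates) as `responses`.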

Compare

Interactive scatter plot: ImageNet top-1 accuracy vs. Brain-Score, one point per model; on the website, the x- and y-axis data can be switched to other scores.
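A static version of this comparison can be reproduced offline from the leaderboard table; below is a minimal matplotlib sketch using a handful of rows, with the values copied from the table above.

```python
# Sketch: ImageNet top-1 accuracy vs. Brain-Score for a few models from the table.
import matplotlib.pyplot as plt

models = {  # model: (ImageNet top-1 %, Brain-Score), taken from the leaderboard table
    "pnasnet_large": (82.9, 0.528),
    "densenet-169": (75.9, 0.549),
    "cornet_s": (74.7, 0.544),
    "alexnet": (57.7, 0.488),
    "squeezenet1_0": (57.5, 0.454),
}

fig, ax = plt.subplots()
for name, (top1, score) in models.items():
    ax.scatter(top1, score)
    ax.annotate(name, (top1, score), fontsize=8)
ax.set_xlabel("ImageNet top-1 accuracy (%)")
ax.set_ylabel("Brain-Score")
plt.show()
```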

Participate

Challenge the data: Submit a model

Please get in touch with us to have us score your model (we are working on automating this step).

Challenge the models: Submit data

If you have neural or behavioral recordings that you would like models to compete on, please get in touch with us.

Change the evaluation: Submit a metric

If you have an idea for a different way of comparing brain and machine, please get in touch with us.

Citation

If you use Brain-Score in your work, please cite "Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?":
@article{SchrimpfKubilius2018BrainScore,
  title={Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like?},
  author={Martin Schrimpf and Jonas Kubilius and Ha Hong and Najib J. Majaj and Rishi Rajalingham and Elias B. Issa and Kohitij Kar and Pouya Bashivan and Jonathan Prescott-Roy and Kailyn Schmidt and Daniel L. K. Yamins and James J. DiCarlo},
  journal={bioRxiv preprint},
  year={2018}
}