Let's Jump Right In

The Brain-Score platform allows users to score models on public data via the command line on your machine, once installed. It also allows scoring models on all data (public and private) via the website. In this section, we will cover how to use the CLI to score a model on public data.

We highly recommend you complete this quickstart before trying to submit a model to the site. Not only will the quickstart show you what to expect from a score, but it will better prepare you to submit a plugin and get a score on all benchmarks!


git clone https://github.com/brain-score/vision.git
cd vision
python -m pip install --upgrade pip
python -m pip install -e .
            

Step 1: Install Packages

In order to use Brain-Score on your machine, you need to install it. Luckily, we have tried to drastically simplify this process. We strongly recommend setting up a virtual environment for all your Brain-Score projects (an example is shown here), but this is not required. Run the four commands above in the command line of your choice (the example is for Unix-based machines).
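If you do want a virtual environment, a minimal setup with Python's built-in venv module might look like the following (the environment name brainscore-env is just an example, not required by Brain-Score):

```shell
# Create an isolated environment (the name is arbitrary)
python3 -m venv brainscore-env

# Activate it (Unix-based shells; on Windows use brainscore-env\Scripts\activate)
. brainscore-env/bin/activate
```

With the environment activated, run the install commands above so the packages land inside it rather than in your system Python.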

This will pull the most recent copy of Brain-Score into your local machine and install all the necessary packages.

Step 2: Run a Model on a Benchmark (via CLI)

Next, make sure your working directory is the vision repository you just cloned (you can confirm this with a pwd call), and run the command below to score the model pixels on a sample benchmark’s publicly available data, MajajHong2015public.IT-pls:


python brainscore_vision score --model_identifier='pixels' --benchmark_identifier='MajajHong2015public.IT-pls'
    

Upon scoring completion, you should see a message like the one below, indicating your score.


<xarray.Score ()>
array(0.07637264)
Attributes:
    error:                 <xarray.Score ()>\narray(0.00548197)
    raw:                   <xarray.Score ()>\narray(0.22545106)\nAttributes:\...
    ceiling:               <xarray.DataArray ()>\narray(0.81579938)\nAttribut...
    model_identifier:      pixels
    benchmark_identifier:  MajajHong2015public.IT-pls
    comment:               layers: {'IT': 'pixels'}


Process finished with exit code 0
    

Let’s break down what these numbers mean. First off, your score is 0.07637264, the first number in the xarray printout. Next, you can see a few other attributes: error, with value 0.00548197, which represents the error of your score estimate; raw, with value 0.22545106, which is the unceiled score your model achieved on the MajajHong2015public.IT-pls benchmark; and ceiling, with value 0.81579938, which is the highest score a perfect model is expected to get.

In this case, the MajajHong2015public.IT-pls benchmark uses the standard NeuralBenchmark, which ceiling-normalizes with explained variance, (r(X, Y) / r(Y, Y))^2; see how this is done here. This is how the final score is calculated from the raw and ceiling scores.
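To see the arithmetic concretely, here is a quick sanity check in plain Python, using the attribute values copied from the printout above:

```python
# Values copied from the score printout above
raw = 0.22545106      # unceiled score the model achieved on the benchmark
ceiling = 0.81579938  # highest score a perfect model is expected to get

# Ceiling-normalization with explained variance: (r(X, Y) / r(Y, Y))^2
final_score = (raw / ceiling) ** 2

print(final_score)  # ~0.0764, matching the reported score
```

The ratio raw / ceiling expresses how close the model comes to the noise ceiling of the data, and squaring converts the correlation ratio into explained variance.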

There is also more metadata listed in this score object, such as model_identifier, benchmark_identifier, and comment.

Please note that compute times may vary; on a 2021 MacBook Pro (M1 Max), scoring takes about 10 minutes.

Further Learning Resources

If you would like to know more about Brain-Score, please visit our Deep Dive series! These are guided tours that walk you through how to put Brain-Score to work for you.

In Deep Dive 1, we will cover the submission package, and you can use this as a formatting guide for your future submissions.

In Deep Dive 2, we will walk through what a custom model submission looks like, and how to submit one.

Finally, in Deep Dive 3, we will cover how to submit a plugin via a GitHub Pull Request (PR).

Optional: Scoring a Language Model

The process for scoring a language model is very similar. First, install the needed packages as in Step 1 above, but change every occurrence of vision to language (e.g., brainscore_vision becomes brainscore_language). Next, simply call the language equivalent of the vision command above:


python brainscore_language score --model_identifier='distilgpt2' --benchmark_identifier='Futrell2018-pearsonr'
        

Here, we are calling the brainscore_language library to score the language model distilgpt2 on the language benchmark Futrell2018-pearsonr.

Stuck?

Our tutorials and FAQs, created with Brain-Score users, aim to cover all bases. However, if issues arise, reach out to our community or consult the troubleshooting guide below for common errors and solutions.

Something Not Right?

If you come across any bugs, please feel free to submit an Issue on GitHub. One of our team members will be happy to investigate.