Scores on benchmarks

Model rank shown below is with respect to all public models.
Benchmark                                 Score   Rank
average_language (5 benchmarks)           .500    9
  neural_language (4 benchmarks)          1.0     1
    Pereira2018-linear (2 benchmarks)     1.0     1
      Pereira2018.243sentences-linear v1  1.0     1
      Pereira2018.384sentences-linear v1  1.0     1
    Fedorenko2016-linear_pearsonr v3      1.0     1
    Fedorenko2016-ridge_pearsonr v3       1.0     1

How to use

from brainscore_language import load_model

model = load_model("oasm-sigma0.5")
model.start_task(...)       # configure the task the model should perform
model.start_recording(...)  # configure which neural recordings to emulate
model.look_at(...)          # present stimuli and collect the model's responses
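For orientation, the neural-recording flow can be sketched end to end. Because `brainscore_language` may not be installed, a stub class stands in for the loaded model below; the method names `start_neural_recording` and `digest_text` follow the `brainscore_language` `ArtificialSubject` interface, but the stub's outputs are placeholders, not real model predictions.

```python
class StubLanguageModel:
    """Stand-in for a model returned by brainscore_language.load_model."""

    def start_neural_recording(self, recording_target, recording_type):
        # A real model would configure which neural site (e.g. the language
        # system) and which measurement modality (e.g. fMRI) to emulate.
        self.recording_target = recording_target
        self.recording_type = recording_type

    def digest_text(self, texts):
        # A real model returns predicted neural activity per text input;
        # the stub emits one placeholder value per input string.
        return {"neural": [0.0 for _ in texts]}


model = StubLanguageModel()
model.start_neural_recording(recording_target="language_system",
                             recording_type="fMRI")
predictions = model.digest_text(["The quick brown fox", "jumped over the dog"])
print(len(predictions["neural"]))  # one prediction per sentence
```

With the real library, swapping `StubLanguageModel()` for `load_model("oasm-sigma0.5")` keeps the same call sequence.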

Brain Encoding Response Generator (BERG)

With BERG, you can generate neural responses to text sentences of your choice using any Brain-Score language model.

For more information on how to use BERG, see the documentation and tutorial.

Benchmarks bibtex

@inproceedings{futrell2018natural,
  title={The Natural Stories Corpus},
  author={Futrell, Richard and Gibson, Edward and Tily, Harry J. and Blank, Idan and Vishnevetsky, Anastasia and Piantadosi, Steven T. and Fedorenko, Evelina},
  booktitle={International Conference on Language Resources and Evaluation (LREC)},
  url={http://www.lrec-conf.org/proceedings/lrec2018/pdf/337.pdf},
  year={2018}
}

@inproceedings{gauthier-etal-2020-syntaxgym,
  title={{S}yntax{G}ym: An Online Platform for Targeted Evaluation of Language Models},
  author={Gauthier, Jon and Hu, Jennifer and Wilcox, Ethan and Qian, Peng and Levy, Roger},
  booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations},
  month={July},
  year={2020},
  address={Online},
  publisher={Association for Computational Linguistics},
  url={https://www.aclweb.org/anthology/2020.acl-demos.10},
  pages={70--76},
  abstract={Targeted syntactic evaluations have yielded insights into the generalizations learned by neural network language models. However, this line of research requires an uncommon confluence of skills: both the theoretical knowledge needed to design controlled psycholinguistic experiments, and the technical proficiency needed to train and deploy large-scale language models. We present SyntaxGym, an online platform designed to make targeted evaluations accessible to both experts in NLP and linguistics, reproducible across computing environments, and standardized following the norms of psycholinguistic experimental design. This paper releases two tools of independent value for the computational linguistics community: 1. A website, syntaxgym.org, which centralizes the process of targeted syntactic evaluation and provides easy tools for analysis and visualization; 2. Two command-line tools, {`}syntaxgym{`} and {`}lm-zoo{`}, which allow any user to reproduce targeted syntactic evaluations and general language model inference on their own machine.}
}

Layer Commitment

No layer commitments found for this model. Older submissions might not have stored this information but will be updated when evaluated on new benchmarks.

Visual Angle

Not applicable (language models are not presented visual stimuli).