earley-parser-minivocab


Benchmark scores

Model rank shown below is with respect to all public models.
average_language rank: X (3 benchmarks)
behavior_language rank: X (1 benchmark)
Futrell2018-pearsonr v1 [reference] rank: X
[Score charts omitted; chart legend entries: ceiling, best, median. Benchmark stimulus samples 0-9 not shown.]

BrainModel translation

Brain-Score operates on BrainModels. A BrainModel can be treated like an experimental subject, with methods such as recording from a cortical region and performing a behavioral task (see the docs). Many models submitted to Brain-Score are what we call BaseModels: often variants of models from the machine-learning community with no particular commitment to the brain and no knowledge of what, e.g., "V1" is. To engage with these models and for ease of use, such BaseModels are typically converted into BrainModels by making commitments to the brain, such as committing layers to cortical regions on separate datasets.
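
As a rough illustration of what this translation amounts to (the class and method names below are hypothetical and do not mirror the actual Brain-Score interfaces), a base model that only exposes raw layer activations can be wrapped into a BrainModel-like object that routes experimenter-style calls such as "record from V1" to a committed layer:

# Hypothetical sketch only: names illustrate the BaseModel -> BrainModel translation,
# they are not the actual Brain-Score interfaces.

class CommittedBrainModel:
    """Wraps a base model (which only knows layers) with commitments to the brain."""

    def __init__(self, base_model, region_to_layer):
        # base_model is assumed to expose activations(stimuli, layers) -> {layer: array}
        self.base_model = base_model
        self.region_to_layer = region_to_layer  # e.g. {"V1": "conv2", "IT": "conv5"}
        self._recording_region = None

    def start_recording(self, region):
        # experimenter-style call: "record from this cortical region"
        self._recording_region = region

    def look_at(self, stimuli):
        # route the recording to the layer committed for the requested region
        layer = self.region_to_layer[self._recording_region]
        return self.base_model.activations(stimuli, layers=[layer])[layer]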

Layer Commitment

BaseModel layers have to be committed to cortical regions. For BaseModels that are automatically translated into BrainModels, this is done on separate public data. The same layers are thus used whenever the same cortical region is recorded from, e.g., always the same layer for V1 rather than different layers per benchmark.
No layer commitments found for this model. Older submissions might not have stored this information but will be updated when evaluated on new benchmarks.
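
A minimal sketch of how such a commitment could be chosen on public data is given below; the function names are illustrative assumptions, not the Brain-Score implementation, and score_fn is assumed to return a similarity score for one layer on one public benchmark.

# Hypothetical sketch of layer commitment: score every candidate layer on separate
# public data for a region and commit the best one, reused for all later benchmarks.

def commit_layers(base_model, candidate_layers, public_benchmarks_by_region, score_fn):
    """Return a {region: layer} mapping chosen once on public data."""
    region_to_layer = {}
    for region, benchmark in public_benchmarks_by_region.items():
        scores = {layer: score_fn(base_model, layer, benchmark)
                  for layer in candidate_layers}
        # the committed layer is then used whenever this region is recorded,
        # regardless of which benchmark asks for it
        region_to_layer[region] = max(scores, key=scores.get)
    return region_to_layer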

Visual Angle (Degrees): None

Models have to declare their field-of-view so that stimuli can be displayed as they were displayed to experimental subjects. For instance, if experimental stimuli were shown at 4 degrees and a model's field-of-view is larger than that, then the stimuli are padded so that the core stimulus makes up 4 degrees in the model's field-of-view.
No visual degrees found for this model. The submission might have failed.
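
A hedged sketch of the padding step described above, assuming RGB stimuli and a gray background; the function name and defaults are assumptions rather than the actual Brain-Score placement code.

# Pad a stimulus so it occupies stimulus_degrees out of model_degrees of the
# model's field of view (e.g. 4 degrees of an 8-degree field of view).
from PIL import Image

def place_on_screen(image, stimulus_degrees, model_degrees, background=(128, 128, 128)):
    """Assumes an RGB image; returns the image unchanged if the model's
    field-of-view is not larger than the stimulus."""
    if model_degrees <= stimulus_degrees:
        return image
    scale = model_degrees / stimulus_degrees  # e.g. 8 / 4 = 2x larger canvas
    new_size = (round(image.width * scale), round(image.height * scale))
    canvas = Image.new(image.mode, new_size, background)
    offset = ((new_size[0] - image.width) // 2, (new_size[1] - image.height) // 2)
    canvas.paste(image, offset)  # center the original stimulus on the padded canvas
    return canvas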

Benchmarks BibTeX

@inproceedings{futrell2018natural,
  title={The Natural Stories Corpus},
  author={Futrell, Richard and Gibson, Edward and Tily, Harry J. and Blank, Idan and Vishnevetsky, Anastasia and
          Piantadosi, Steven T. and Fedorenko, Evelina},
  booktitle={International Conference on Language Resources and Evaluation (LREC)},
  url={http://www.lrec-conf.org/proceedings/lrec2018/pdf/337.pdf},
  year={2018}
}