How to Fix Common Errors
This troubleshooting guide helps users work through common issues they might encounter while using our platform, especially during the submission process. It provides step-by-step solutions to a variety of problems so that users can quickly find the help they need.
We continually update this guide based on user feedback and emerging issues to keep it a useful tool for resolving problems efficiently.
If your issue is not listed here, please feel free to open a GitHub issue, check out the community page, or contact the Brain-Score team directly via Slack or email; we are always happy to help!
1) My submission didn't work. What happened?
Description: This is the most common question we get, and our FAQ provides an overview of what could have happened.
Cause: See FAQ page.
Fix: If your submission is not successful, you should get an email with a link that details what went wrong in your build. This is called the Console Log (or Build Log), and it details exactly what happened to cause your submission to fail. We included common Console Log errors below (along with their resolutions), but if your error is not on that list, feel free to reach out to the Brain-Score Community or directly to the Brain-Score Team.
2) SSL Error when trying to run models locally
Description: After implementing a PyTorch model and scoring it locally for the first time, you get:
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1076)
Cause: This can occur when your local Python installation cannot verify SSL certificates (for example, because of a missing local issuer certificate), or when PyTorch's download servers are temporarily having issues.
Fix: Add the following lines at the very beginning of your code and try again (note that this disables SSL certificate verification for downloads, so treat it as a local workaround only):
import ssl
# Workaround: skip certificate verification when fetching files over HTTPS
ssl._create_default_https_context = ssl._create_unverified_context
3) Console Log error: Server is unable to locate model weights (or another file)
Description: When checking your console (build) log for what might have gone wrong, you see an error akin to this:
ERROR:root:Could not run model mikes_model on benchmark Geirhos2021sketch-error_consistency because of [Errno 2] No such file or directory: '/rdma/vast-rdma/scratch/Sun/score_plugins_vision_env_142/vision/brainscore_vision/models/mikes_model/mikes_model.pth'
Traceback (most recent call last):
Cause: You are trying to save your weights to a local (relative) path, or for some reason the server cannot locate your .pth (weights) file. Sometimes, your working path on your machine might not be the same as the working path our server uses to execute runs.
Fix: We have written code that allows users to download from S3 buckets here, specifically the load_file_from_s3 function. Please use this code in place of your own if you are hosting your weights on S3. We hope to eventually add support for other cloud storage, like Google Drive.
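If the underlying problem is a relative path (rather than S3 hosting), you can also sidestep it by anchoring the weights path to your module file instead of the working directory, since the server's working directory is not the same as yours. A minimal sketch of that idea; the function name resolve_weights_path is hypothetical, not part of Brain-Score:

```python
import os

def resolve_weights_path(filename):
    # Build an absolute path anchored to this module's directory,
    # so it stays valid no matter which working directory the
    # server launches the run from.
    module_dir = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(module_dir, filename)

# e.g. weights_path = resolve_weights_path('mikes_model.pth')
```

This only helps if the weights file is actually submitted alongside your plugin code; for weights hosted remotely, use the S3 download helper described above.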
4) Console Log error: No registrations found
Description: When checking your console (build) log for what might have gone wrong, you see an error akin to this:
ERROR:root:Could not run model mikes_model on benchmark ImageNet-C-noise-top1 because of No registrations found for mikes_model
Cause: You are trying to score a model (plugin) that does not exist yet in the Brain-Score ecosystem.
Fix: Please add your model to the model_registry in your submission package's __init__.py. See here (specifically Part 4) for more details.
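The error comes from a registry lookup: scoring resolves a model identifier to a registered constructor and fails when the key was never added. The snippet below is a generic plain-Python illustration of that pattern, not Brain-Score's actual code; the names model_registry, register, and load_model are illustrative:

```python
# Illustrative lazy registry: maps an identifier to a
# zero-argument constructor, built only when requested.
model_registry = {}

def register(identifier, constructor):
    model_registry[identifier] = constructor

def load_model(identifier):
    if identifier not in model_registry:
        # This is the situation behind "No registrations found for ..."
        raise KeyError(f'No registrations found for {identifier}')
    return model_registry[identifier]()  # constructed on demand

register('mikes_model', lambda: 'mikes_model instance')
```

In a real submission, the registration happens in your plugin's __init__.py so that Brain-Score can discover the model at import time.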
5) Console Log error: Issue with Tensorflow 1.15
Description: When checking your console (build) log for what might have gone wrong, you see an error akin to this:
ERROR: Could not find a version that satisfies the requirement tensorflow==1.15
Cause: Usually this happens when a submission requires TensorFlow >1.15: our package installer then tries to reconcile the conflicting versioning requirements (the server supports TF <=1.15, while the submission requests >1.15) and fails. It can also happen if you force your model to use a Python version >3.7, since TF 1.15 is not available for those Python versions.
Fix: First, make sure you are not forcing your model to use a Python version >3.7. Currently, Brain-Score does not support Python versions >3.7 or TF versions >1.15, but we will very soon, so stay tuned.
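If your submission declares its dependencies in a requirements file (adapt to however your plugin lists dependencies), pinning a compatible version up front avoids the conflict, for example:

```
tensorflow==1.15
```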
6) Console Log error: len(logits)
Description: When checking your console (build) log for what might have gone wrong, you see an error akin to this:
assert len(logits['neuroid']) == 1000
AssertionError
Cause: This normally arises when Brain-Score tries to score your model on ImageNet engineering benchmarks which often expect a model to be able to do 1000-way classification on ImageNet labels.
Fix: Currently, you will still get a score for your model on neural and behavioral benchmarks, but not those that threw the error (again, this is usually just engineering benchmarks). Your neural and behavioral scores will still show up on your profile.
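The assertion itself is easy to reproduce: the ImageNet engineering benchmarks effectively require one logit per ImageNet class. A simplified sketch of that expectation, for illustration only (this is not Brain-Score's actual benchmark code):

```python
NUM_IMAGENET_CLASSES = 1000

def check_logits(logits):
    # ImageNet engineering benchmarks expect 1000-way classification,
    # so a model that outputs any other number of logits fails here.
    assert len(logits) == NUM_IMAGENET_CLASSES, (
        f'expected {NUM_IMAGENET_CLASSES} logits, got {len(logits)}')

check_logits([0.0] * 1000)  # a 1000-way output passes the check
```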
7) Install Error: AttributeError: EntryPoints
Description: When running the model.py file in the Deep Dives, you get an error similar to this:
AttributeError: 'EntryPoints' object has no attribute 'get'
Cause: This can arise if your version of xarray is too recent. We are working on getting this fixed ASAP by bringing our required xarray version up to date.
Fix: Downgrade your xarray package to version 0.18.1 by running the following command:
pip install xarray==0.18.1
This should remove the errors you are seeing.