This line of code (line 71 of pipeline_cnn.py as of commit 6f1968a):

```python
best_model = torch.hub.load_state_dict_from_url('https://docs.google.com/uc?export=download&id=13CLZoNyvCt2K46UvAyHUqH7099FbnBh_')
```

errors on `_cuda_deserialize`. Presumably this checkpoint was saved from a CUDA device and therefore cannot be deserialised for CPU computation. Should the model be made to load correctly on both CUDA and CPU?
The same issue was raised by @Phlair in autoballs: see rg314/autoballs#7.
Needs to include `map_location` to point at the current device:

```python
import torch

best_model = torch.hub.load_state_dict_from_url(
    'https://docs.google.com/uc?export=download&id=13CLZoNyvCt2K46UvAyHUqH7099FbnBh_',
    map_location='cpu')
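A device-agnostic variant might look like the sketch below (my own suggestion, not code from the repo). `torch.hub.load_state_dict_from_url` forwards `map_location` to `torch.load`, so the same pattern is demonstrated here with a small in-memory checkpoint instead of the Drive URL:

```python
import io

import torch

# Pick whichever device is actually available, so the same code runs
# on CUDA machines and CPU-only machines alike.
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# With the real checkpoint this would be:
# state = torch.hub.load_state_dict_from_url(URL, map_location=device)

# Demonstrate with a tiny in-memory checkpoint: save a state dict to a
# buffer, then load it back mapped onto the selected device.
buffer = io.BytesIO()
torch.save({'weight': torch.randn(2, 2)}, buffer)
buffer.seek(0)
state = torch.load(buffer, map_location=device)
print(state['weight'].device.type)  # 'cuda' on a GPU machine, 'cpu' otherwise
```

Hard-coding `map_location='cpu'` also works everywhere, since CUDA tensors can always be mapped down to CPU; selecting the device dynamically just avoids a later `.to(device)` copy on GPU machines.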
Fixed by rg314 in commit 6255f46 ("bug fix for #11").