
Specific use of cuda model not allowing cpu compute #11

Closed

rg314 opened this issue Mar 19, 2021 · 2 comments

Comments

rg314 (Owner) commented Mar 19, 2021

This line of code (line 71 of pipeline_cnn.py as of commit 6f1968a):
best_model = torch.hub.load_state_dict_from_url('https://docs.google.com/uc?export=download&id=13CLZoNyvCt2K46UvAyHUqH7099FbnBh_')

fails in _cuda_deserialize.

Presumably this checkpoint was saved on a CUDA device and therefore cannot be deserialised for CPU computation.

Should we make sure the model loads under both CUDA and CPU compute?

The same issue was raised by @Phlair in autoballs.
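
(A minimal sketch of the failure mode described above, assuming PyTorch is installed; the toy model and file name are illustrative and not taken from the repository.)

import torch
import torch.nn as nn

# Illustrative stand-in for the CNN in pipeline_cnn.py.
model = nn.Linear(10, 2)

# Saving on a GPU machine records CUDA-backed storages in the checkpoint.
if torch.cuda.is_available():
    model = model.cuda()
torch.save(model.state_dict(), 'checkpoint.pth')

# On a CPU-only machine, loading a CUDA-saved checkpoint without
# map_location fails inside _cuda_deserialize:
#     state_dict = torch.load('checkpoint.pth')
# Remapping the storages avoids the failure:
state_dict = torch.load('checkpoint.pth', map_location='cpu')
model.load_state_dict(state_dict)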

Phlair (Collaborator) commented Mar 19, 2021

See rg314/autoballs#7

rg314 (Owner, Author) commented Mar 19, 2021

The load call needs to include map_location so the checkpoint is remapped to the current device:

import torch

# Remap the checkpoint's CUDA storages onto the CPU while loading.
best_model = torch.hub.load_state_dict_from_url(
    'https://docs.google.com/uc?export=download&id=13CLZoNyvCt2K46UvAyHUqH7099FbnBh_',
    map_location='cpu')
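
(A slightly more general sketch: choosing map_location from whatever device is actually available means the same call works on both CUDA and CPU hosts. The URL is the one from the issue; the rest is an assumption about how the call could be written.)

import torch

# Use the GPU when present, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

best_model = torch.hub.load_state_dict_from_url(
    'https://docs.google.com/uc?export=download&id=13CLZoNyvCt2K46UvAyHUqH7099FbnBh_',
    map_location=device)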

rg314 added a commit that referenced this issue Mar 19, 2021
rg314 self-assigned this Mar 19, 2021
rg314 added the bug label Mar 19, 2021
rg314 closed this as completed Mar 20, 2021