This happens because your model requires more memory than your GPU can provide. Try loading the data in batches and tuning the batch size, or shrink your model's topology so everything fits in GPU memory.
Either approach will most likely affect your network's results, each in its own way.
Here is a batching example from one of my projects:
```python
from torch.utils.data import Dataset, DataLoader

class CustomDataset(Dataset):
    def __init__(self, features, target):
        self.features = features
        self.target = target

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        return self.features[idx], self.target[idx]

batch_size = 32  # Example batch size
loader = DataLoader(
    dataset=CustomDataset(X_train, y_train),  # X_train, y_train: your feature/target tensors
    batch_size=batch_size,
    shuffle=True,  # Shuffle the dataset before each epoch to help prevent overfitting
)

for i, (features, targets) in enumerate(loader):
    ...  # Process each batch here
```
I also found that if the hidden layer is too large, CUDA runs out of memory.
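To see why hidden-layer width matters, here is a rough back-of-envelope sketch (plain Python, no PyTorch needed) that estimates parameter memory for a hypothetical two-layer MLP with made-up input/output sizes. Note that activation memory, which also scales with `batch_size * hidden_size`, is not counted here and is often the larger cost during training.

```python
def linear_params(n_in, n_out):
    # A fully connected layer stores a weight matrix plus a bias vector.
    return n_in * n_out + n_out

# Hypothetical model: 10,000 input features, one hidden layer, 10 outputs.
for hidden in (512, 4096, 16384):
    params = linear_params(10_000, hidden) + linear_params(hidden, 10)
    mb = params * 4 / 1e6  # 4 bytes per float32 parameter
    print(f"hidden={hidden}: ~{mb:.1f} MB of float32 parameters")
```

Parameter memory grows roughly linearly with hidden width here, and optimizers like Adam keep extra per-parameter state on top of this, so halving the hidden size can free a substantial amount of GPU memory.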