
CUDA out of memory. #24

Closed
ybu-lxd opened this issue May 16, 2024 · 2 comments

@ybu-lxd

ybu-lxd commented May 16, 2024

from src.efficient_kan.kan import KAN
import torch
net = KAN([1152,1152*4,1152]).to("cuda")
x = torch.rand(size=(4096*4,1152)).to("cuda")
net(x)

I found that if the hidden layer is too large, CUDA runs out of memory.

@Arturossi

This happens because your model demands more memory than your GPU can provide. Either use batches and tune their size, or shrink your topology so everything fits in memory.

Either approach will most likely affect your network's results, each in its own way.

Here is a batching example I used in one of my projects:

from torch.utils.data import Dataset, DataLoader

class CustomDataset(Dataset):
    def __init__(self, features, target):
        self.features = features
        self.target = target

    def __len__(self):
        return len(self.features)

    def __getitem__(self, idx):
        return self.features[idx], self.target[idx]

batch_size = 32  # Example batch size

loader = DataLoader(
    dataset=CustomDataset(X_train, y_train),  # X_train / y_train are your own training tensors
    batch_size=batch_size,
    shuffle=True,  # Shuffle the dataset before each epoch to help prevent overfitting
)

for i, (features, targets) in enumerate(loader):
    # Process each batch here, e.g. move it to the GPU and run the forward/backward pass
    ...

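For the snippet in the original post specifically, if you only need the forward pass, a minimal sketch (assuming inference only, so autograd buffers can be skipped; chunk_size is just an illustrative value) is to keep the full input on the CPU, split it with torch.split, and run one chunk at a time:

from src.efficient_kan.kan import KAN
import torch

net = KAN([1152, 1152 * 4, 1152]).to("cuda")
x = torch.rand(size=(4096 * 4, 1152))  # keep the full input on the CPU

chunk_size = 1024  # illustrative value; tune it to your GPU memory
outputs = []
with torch.no_grad():  # no autograd buffers, since we only want the outputs
    for chunk in torch.split(x, chunk_size):
        # Move one chunk to the GPU, run it, and bring the result back to free GPU memory
        outputs.append(net(chunk.to("cuda")).cpu())
out = torch.cat(outputs)

If you need gradients (i.e. you are training), the DataLoader approach above with a smaller batch size is the way to go.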
@Blealtan
Owner

See #23.
