
Keep getting OOM #70

Open
zono50 opened this issue Oct 2, 2024 · 1 comment

Comments


zono50 commented Oct 2, 2024

I have an NVIDIA GeForce RTX 4070 with 12 GB of VRAM, running in a conda environment with a Python venv. I'm using nvtop to monitor my VRAM usage: it sits between 35-40% until about epoch 2 or 3, then it jumps to 100% and goes OOM.

Is there a memory leak, or am I doing something wrong that is causing this? I have lowered all the settings to the bare minimum:

Epochs - 6
Batch size - 2
Grad accumulation steps - 2
Seconds - 7

I'm using cudatoolkit 11.8 and cuDNN 8.9.2.26 with the large-v3 model, and the OOM always happens around epoch 2 or 3.
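As a diagnostic, a minimal sketch in plain PyTorch (not tied to this repo's trainer API) can log CUDA memory at the end of each epoch to confirm whether allocations really grow between epochs 1 and 3. The `log_vram` helper and the epoch loop below are placeholders:

```python
import torch

def log_vram(tag: str, device: int = 0) -> None:
    """Print allocated/reserved/peak CUDA memory in GiB (hypothetical helper)."""
    gib = 1024 ** 3
    alloc = torch.cuda.memory_allocated(device) / gib
    reserved = torch.cuda.memory_reserved(device) / gib
    peak = torch.cuda.max_memory_allocated(device) / gib
    print(f"[{tag}] allocated={alloc:.2f} GiB  reserved={reserved:.2f} GiB  peak={peak:.2f} GiB")

assert torch.cuda.is_available(), "needs a CUDA build of PyTorch"

# Placeholder epoch loop -- substitute the repo's actual training step here.
for epoch in range(6):
    # train_one_epoch(...)  # assumed training call, not part of this repo's documented API
    log_vram(f"epoch {epoch}")
    torch.cuda.reset_peak_memory_stats()  # measure the peak per epoch rather than cumulatively
```

If the peak keeps climbing each epoch while batch size stays fixed, something (retained loss tensors, cached evaluation outputs, etc.) is being kept alive across epochs rather than the settings simply being too large for 12 GB.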


Mycohl commented Oct 3, 2024

There's no information in this repo about how much VRAM it requires.
I can't even run a single epoch with 8 GB; I get an immediate OutOfMemoryError.
Searching around, people variously claim that fine-tuning works on 8 GB or that it requires 16 GB.

I can run the regular xtts repo and RVC without issue.

This repo, unlike the regular xtts repo, doesn't appear to support the --lowvram argument.

My only remaining option is to set it up on Windows, where the NVIDIA driver apparently lets CUDA spill over into shared system RAM.
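In the absence of a documented `--lowvram` path, one generic thing worth trying (plain PyTorch, not a flag or API this repo is known to support) is to check free VRAM up front and aggressively release cached memory between epochs; `report_free_vram` and `free_cuda_memory` below are hypothetical helpers:

```python
import gc
import torch

def report_free_vram(device: int = 0) -> float:
    """Return free VRAM in GiB as reported by the CUDA driver."""
    free_bytes, total_bytes = torch.cuda.mem_get_info(device)
    print(f"GPU {device}: {free_bytes / 1024**3:.2f} GiB free of {total_bytes / 1024**3:.2f} GiB")
    return free_bytes / 1024**3

def free_cuda_memory() -> None:
    """Drop unreferenced Python objects, then return cached blocks to the driver."""
    gc.collect()
    torch.cuda.empty_cache()

# Example: call between epochs (placeholder usage, not this repo's trainer loop).
report_free_vram()
free_cuda_memory()
report_free_vram()
```

Note that `empty_cache()` only releases memory PyTorch has already cached; it won't fix a true leak where tensors are still referenced, but it can help if fragmentation or cached buffers are what pushes epoch 2-3 over the edge.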
