
Handle OOM for HF Trainer fine-tuning #254

Open

gkumbhat opened this issue Oct 31, 2023 · 1 comment
Labels
enhancement New feature or request

Comments

@gkumbhat (Collaborator)
Description

When we try to fine-tune a model that doesn't fit in memory with the configured parameters, the trainer currently tries to find an appropriate batch size. If it cannot find one, it errors out with the following error:

-- Process 0 terminated with the following error:
2023-10-24T19:16:50.309027 [torch:ERRR] Traceback (most recent call last):
2023-10-24T19:16:50.309027 [torch:ERRR]   File "/u/joeolson/.conda/envs/tuning/lib/python3.9/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
2023-10-24T19:16:50.309027 [torch:ERRR]     fn(i, *args)
2023-10-24T19:16:50.309027 [torch:ERRR]   File "/u/joeolson/.conda/envs/tuning/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/api.py", line 370, in _wrap
2023-10-24T19:16:50.309027 [torch:ERRR]     ret = record(fn)(*args_)
2023-10-24T19:16:50.309027 [torch:ERRR]   File "/u/joeolson/.conda/envs/tuning/lib/python3.9/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
2023-10-24T19:16:50.309027 [torch:ERRR]     return f(*args, **kwargs)
2023-10-24T19:16:50.309027 [torch:ERRR]   File "/u/joeolson/git/caikit-nlp2/examples/../caikit_nlp/modules/text_generation/text_generation_local.py", line 617, in _launch_training
2023-10-24T19:16:50.309027 [torch:ERRR]     trainer.train()
2023-10-24T19:16:50.309027 [torch:ERRR]   File "/u/joeolson/.conda/envs/tuning/lib/python3.9/site-packages/transformers/trainer.py", line 1591, in train
2023-10-24T19:16:50.309027 [torch:ERRR]     return inner_training_loop(
2023-10-24T19:16:50.309027 [torch:ERRR]   File "/u/joeolson/.conda/envs/tuning/lib/python3.9/site-packages/accelerate/utils/memory.py", line 134, in decorator
2023-10-24T19:16:50.309027 [torch:ERRR]     raise RuntimeError("No executable batch size found, reached zero.")
2023-10-24T19:16:50.309027 [torch:ERRR] RuntimeError: No executable batch size found, reached zero.

We can catch this error and raise an explicit OOM error to better let the user know of the issue.
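A minimal sketch of what this could look like, assuming we wrap `trainer.train()` (as called in `_launch_training`) and match on the exact `RuntimeError` message shown in the traceback above; the `OutOfMemoryError` wrapper type here is hypothetical, not an existing caikit-nlp class:

```python
from transformers import Trainer


class OutOfMemoryError(RuntimeError):
    """Hypothetical error type surfaced to the user on OOM."""


def train_with_oom_handling(trainer: Trainer):
    """Run training, converting accelerate's batch-size-search failure
    into an explicit OOM error for the user."""
    try:
        return trainer.train()
    except RuntimeError as err:
        # accelerate's find_executable_batch_size raises this exact message
        # after halving the batch size all the way down to zero
        # (see the traceback above)
        if "No executable batch size found" in str(err):
            raise OutOfMemoryError(
                "Training ran out of memory: no executable batch size could "
                "be found for the configured model and training parameters."
            ) from err
        raise
```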

gkumbhat added the enhancement (New feature or request) label Oct 31, 2023
@Ssukriti (Collaborator) commented Nov 1, 2023

This can be either an OOM or a CUDA error, per the docstrings in accelerate/utils/memory.py. We can throw a message similar to what is in the docstring.
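For reference, accelerate's batch-size finder retries only when the exception message looks memory-related; a sketch of mirroring that check on our side is below. The message strings are assumed from accelerate's utils/memory.py and may differ across versions:

```python
# Sketch: mirror accelerate's heuristic for memory-related failures, so the
# user-facing message can mention both OOM and CUDA/cuDNN errors as the
# docstring describes. Strings assumed from accelerate/utils/memory.py.
MEMORY_ERROR_HINTS = (
    "CUDA out of memory.",                         # CUDA OOM
    "cuDNN error: CUDNN_STATUS_NOT_SUPPORTED.",    # cuDNN failure on OOM
    "DefaultCPUAllocator: can't allocate memory",  # CPU OOM
)


def looks_like_memory_error(err: Exception) -> bool:
    """Return True if the exception message matches a known OOM/CUDA pattern."""
    return isinstance(err, RuntimeError) and any(
        hint in str(err) for hint in MEMORY_ERROR_HINTS
    )
```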
