
End2end pytorch lightning errors #25525

Closed
2 of 4 tasks
albertsun1 opened this issue Aug 16, 2023 · 5 comments

@albertsun1

System Info

  • transformers version: 4.31.0
  • Platform: Linux-4.19.0-25-cloud-amd64-x86_64-with-glibc2.28
  • Python version: 3.9.17
  • Huggingface_hub version: 0.16.4
  • Safetensors version: 0.3.2
  • Accelerate version: not installed
  • Accelerate config: not found
  • PyTorch version (GPU?): 2.0.1+cu117 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: Y
  • Using distributed or parallel set-up in script?: Y

Who can help?

@shamanez

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

Hey @shamanez, I'm attempting to run sh ./test_run/test_finetune.sh using one GPU, but I've been running into errors with PyTorch Lightning. I've tried PyTorch Lightning 1.6.4, as recommended in requirements.txt, and still hit errors. This other thread seems to have run into the same type of bugs: #22210

  • PyTorch Lightning Versions 1.6/1.6.4/1.6.5: I get the following error (see the sketch after this list):

pytorch_lightning.utilities.exceptions.MisconfigurationException: The provided lr scheduler LambdaLR doesn't follow PyTorch's LRScheduler API. You should override the LightningModule.lr_scheduler_step hook with your own logic if you are using a custom LR scheduler.

I've also experimented with other versions to see if I could get it fixed, but it still doesn't work:

  • PyTorch Lightning Version 1.5:

pytorch_lightning.utilities.exceptions.MisconfigurationException: You passed devices=auto but haven't specified accelerator=('auto'|'tpu'|'gpu'|'ipu'|'cpu') for the devices mapping, got accelerator=None.

I tried adding accelerator='gpu' or accelerator='auto' as parameters to the Trainer code, but doing either gave me the same error.

  • PyTorch Lightning Versions 1.8/1.9: I get the following error:

module 'pytorch_lightning' has no attribute 'profiler'
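
For the 1.6.x failure, the exception message itself suggests overriding LightningModule.lr_scheduler_step. A minimal sketch of such an override, assuming the 1.6.x hook signature (optimizer_idx was removed in Lightning 2.0) and a hypothetical module name standing in for the script's own LightningModule subclass:

import pytorch_lightning as pl

class RagFinetuneModule(pl.LightningModule):  # hypothetical name
    def lr_scheduler_step(self, scheduler, optimizer_idx, metric):
        # LambdaLR takes no metric, so advancing the scheduler by one
        # step is all this hook needs to do.
        scheduler.step()

Overriding the hook should make Lightning 1.6.x skip the scheduler-API isinstance check that LambdaLR appears to fail under PyTorch 2.x.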

Expected behavior

I'd expect the code to train a RAG end-to-end model, but it hits these errors before training can even start.

@shamanez
Contributor

I guess you are also using a newer Transformers version. My advice is to use the latest Transformers and Lightning; I can help with debugging the Lightning errors.

@albertsun1
Author

Hey, thanks so much for responding so quickly. When I upgraded to the latest stable versions of Lightning (2.0.7) and Transformers (4.31), I ran into an issue: the most recent PyTorch Lightning release removed support for pl.Trainer.add_argparse_args (hpcaitech/ColossalAI#2938). As such, I got the following error:

Traceback (most recent call last):
  File "/home/albertsun/transformers/examples/research_projects/rag-end2end-retriever/finetune_rag.py", line 810, in <module>
    parser = pl.Trainer.add_argparse_args(parser)
AttributeError: type object 'Trainer' has no attribute 'add_argparse_args'

I'm not too familiar with PyTorch Lightning; do you know if there's a workaround for this parser code in finetune_rag.py? Thanks!

@shamanez
Contributor

Yes, the latest version of the Trainer no longer has add_argparse_args. You can add the arguments manually and pass them to the Trainer yourself; the available flags are documented here (see the sketch below):

https://lightning.ai/docs/pytorch/stable/common/trainer.html
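
A minimal sketch of that manual replacement, assuming Lightning 2.x (which removed both Trainer.add_argparse_args and Trainer.from_argparse_args); the flags below are an illustrative subset, not the script's full argument set:

import argparse

import pytorch_lightning as pl

def build_trainer() -> pl.Trainer:
    parser = argparse.ArgumentParser()
    # Re-declare only the Trainer flags the script actually uses.
    parser.add_argument("--accelerator", type=str, default="gpu")
    parser.add_argument("--devices", type=int, default=1)
    parser.add_argument("--max_epochs", type=int, default=100)
    parser.add_argument("--accumulate_grad_batches", type=int, default=1)
    args = parser.parse_args()

    # Construct the Trainer explicitly instead of relying on the
    # removed from_argparse_args helper.
    return pl.Trainer(
        accelerator=args.accelerator,
        devices=args.devices,
        max_epochs=args.max_epochs,
        accumulate_grad_batches=args.accumulate_grad_batches,
    )

finetune_rag.py defines many more arguments than this, so the real fix would re-declare each Trainer flag the script relies on.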

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@Rakin061

Hello @albertsun1
I'm facing the same issue while training. Could you please shed some light on how you managed to solve this error?
