
Consistent and flexible configuration for GPU acceleration #119

Open
MedericFourmy opened this issue Jan 19, 2024 · 0 comments

MedericFourmy commented Jan 19, 2024

We need to offer a more consistent way to define which type of hardware acceleration devices happypose runs on. Here are the current and desired behaviors I have in mind so far:

  • Choose whether torch runs on CPU or GPU.
    • current: use CUDA if available, use CPU otherwise
    • desired: the user should be able to define the preferred acceleration device (e.g. through an environment variable)
  • Choose whether to run the renderers on GPU or CPU.
    • current: for pybullet, the choice is possible. For panda3d, the GPU is used if CUDA is available.
    • desired: configurable for both
  • AMD ROCm support: some computing clusters (e.g. LUMI) require the use of AMD GPUs. Hardware acceleration through AMD ROCm (roughly the equivalent of Nvidia CUDA) should be supported by the latest torch versions. The refactoring should take this into account.
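The env-var override for the torch device could be sketched as below. This is only an illustration, not existing happypose code: the variable name `HAPPYPOSE_DEVICE` and the helper `resolve_device` are assumptions, and the CUDA check is passed in as a flag so the selection logic stays independent of torch itself.

```python
import os

def resolve_device(cuda_available: bool) -> str:
    """Pick a torch device string, honoring an optional env-var override.

    Hypothetical helper: HAPPYPOSE_DEVICE is an assumed variable name,
    not something happypose currently reads.
    """
    preferred = os.environ.get("HAPPYPOSE_DEVICE")  # e.g. "cpu", "cuda:0"
    if preferred:
        # An explicit user choice wins, even if CUDA is available.
        return preferred
    # Fallback mirrors the current behavior: CUDA if available, else CPU.
    return "cuda" if cuda_available else "cpu"

# Usage (assuming torch is installed):
#   import torch
#   device = torch.device(resolve_device(torch.cuda.is_available()))
```

The same pattern would extend to the renderers: each backend reads the override first and only then falls back to its own hardware probe, so CPU-only runs can be forced on a machine that does have a GPU.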

EDIT: torch's ROCm build seems to "masquerade" as CUDA, so no adaptation should be needed in this regard.
See this gist
