From b5d60272ce25f44961e36c9d1c8abb67ea0d228d Mon Sep 17 00:00:00 2001
From: Braden Everson
Date: Tue, 26 Mar 2024 18:49:30 -0500
Subject: [PATCH] Update CONTRIBUTING.md

---
 CONTRIBUTING.md | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 12fb4ac..a546ddb 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -8,6 +8,16 @@ As a singular college student working on this crate for fun to better understand
 
 ### Running code locally
 
+To begin, please follow the same instructions listed in the README to ensure XLA is properly installed and sourced on your machine.
+
+1) Identify the [latest compatible versions of CUDA and cuDNN](https://www.tensorflow.org/install/source#gpu). Adapt [these instructions](https://medium.com/@gokul.a.krishnan/how-to-install-cuda-cudnn-and-tensorflow-on-ubuntu-22-04-2023-20fdfdb96907) to install the two versions of CUDA and cuDNN together.
+
+2) Install `clang` and `libclang1`.
+
+3) Download and extract [xla_extension](https://github.com/elixir-nx/xla/releases/tag/v0.6.0).
+
+4) Make sure `LD_LIBRARY_PATH` includes `/path/to/xla_extension/lib`, and make sure the relevant CUDA paths are also visible to the system.
+
 Assuming you downloaded unda to `/home/user/unda` and you want to run the crate locally, it's relatively self explanatory as you would run any rust project:
 
 ```bash