Merge pull request #196 from toolboc/jetsonEmbedded
Add NVIDIA Jetson Embedded Device Support
tricktreat authored May 15, 2023
2 parents 20030f8 + 8ec3c4f commit 56c7a88
Showing 3 changed files with 136 additions and 0 deletions.
4 changes: 4 additions & 0 deletions .dockerignore
@@ -0,0 +1,4 @@
.git
server/models/*
!server/models/download.sh
!server/models/download.ps1
103 changes: 103 additions & 0 deletions Dockerfile.jetson
@@ -0,0 +1,103 @@
# NVIDIA Jetson embedded device support with GPU accelerated local model execution for https://github.com/microsoft/JARVIS

# Base image for ffmpeg build env: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-jetpack/tags
FROM nvcr.io/nvidia/l4t-jetpack:r35.2.1 AS build

RUN apt update && apt install -y --no-install-recommends \
build-essential git libass-dev cmake && \
rm -rf /var/lib/apt/lists/*

# Build ffmpeg dependency libraries
RUN git clone https://github.com/jocover/jetson-ffmpeg.git && \
cd jetson-ffmpeg && \
sed -i 's=Libs: -L${libdir} -lnvmpi=Libs: -L${libdir} -lnvmpi -L/usr/lib/aarch64-linux-gnu/tegra -lnvbufsurface=g' nvmpi.pc.in && \
mkdir build && \
cd build && \
cmake .. && \
make -j$(nproc) && \
make install && \
ldconfig && \
git clone https://git.ffmpeg.org/ffmpeg.git -b release/4.2 --depth=1 && \
cd ffmpeg && \
wget https://github.com/jocover/jetson-ffmpeg/raw/master/ffmpeg_nvmpi.patch && \
git apply ffmpeg_nvmpi.patch && \
./configure --enable-nvmpi --enable-libass && \
make -j$(nproc)

# Base image: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch/tags
# For running JARVIS application layer
FROM nvcr.io/nvidia/l4t-pytorch:r35.2.1-pth2.0-py3

ENV LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
COPY --from=build /usr/local/lib/libnvmpi.a /usr/local/lib
COPY --from=build /usr/local/lib/libnvmpi.so.1.0.0 /usr/local/lib
COPY --from=build jetson-ffmpeg/build/ffmpeg/ffmpeg /usr/local/bin
COPY --from=build jetson-ffmpeg/build/ffmpeg/ffprobe /usr/local/bin
RUN ln /usr/local/lib/libnvmpi.so.1.0.0 /usr/local/lib/libnvmpi.so
ENV MAKEFLAGS="-j$(nproc)"

COPY ./server/requirements.txt .

# Install model server dependencies
RUN apt update && apt remove -y \
opencv-dev opencv-libs opencv-licenses opencv-main opencv-python opencv-scripts python3-numpy && \
rm -rf /var/lib/apt/lists/*

RUN python3 -m pip install importlib-metadata==4.13.0 && \
python3 -m pip install -r requirements.txt && \
rm -rf requirements.txt

# Update torch deps via reinstall
RUN python3 -m pip install torch==2.0.0a0+ec3941ad.nv23.2 torchaudio==0.13.1+b90d798 torchvision==0.14.1a0+5e8e2f1

# Downgrade opencv-python to v4.5
RUN python3 -m pip install opencv-python==4.5.5.64

# Install nvidia-opencv-dev
RUN apt update && apt install -y --no-install-recommends \
nvidia-opencv-dev && \
rm -rf /var/lib/apt/lists/*

# Fix loading of scikit dep at runtime
ENV LD_PRELOAD='/usr/local/lib/python3.8/dist-packages/scikit_learn.libs/libgomp-d22c30c5.so.1.0.0'

# Install nodejs npm from nodesource
ENV NVM_DIR /root/.nvm
ENV NODE_VERSION v18.16.0
RUN wget -q -O - https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash && \
. "$NVM_DIR/nvm.sh" && \
nvm install $NODE_VERSION && \
nvm alias default $NODE_VERSION && \
nvm use default
ENV NODE_PATH $NVM_DIR/versions/node/$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/$NODE_VERSION/bin:$PATH

WORKDIR /app

# Copy source files
COPY . .

# Install web server dependencies
RUN apt update && apt install -y --no-install-recommends \
xdg-utils && \
rm -rf /var/lib/apt/lists/* && \
cd web && \
npm install

# Download local models
# RUN apt update && apt install -y --no-install-recommends \
# git-lfs && \
# rm -rf /var/lib/apt/lists/* && \
# cd server/models && \
# bash download.sh

# Expose the model server ports
EXPOSE 8004
EXPOSE 8005
# Expose the web server port
EXPOSE 9999

WORKDIR /app/server

# Start the model and web server
CMD python3 models_server.py --config configs/config.default.yaml
29 changes: 29 additions & 0 deletions README.md
@@ -179,6 +179,35 @@ The server-side configuration file is `server/configs/config.default.yaml`, and

On a personal laptop, we recommend the configuration `inference_mode: hybrid` and `local_deployment: minimal`. However, the models available under this setting may be limited due to the instability of remote Hugging Face Inference Endpoints.
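As an illustration, the laptop settings above can be written to a config file as follows. This is a sketch only: the filename `config.laptop.yaml` is an assumption, and only the two keys discussed here are shown, while a real config would carry the remaining keys from `config.default.yaml`.

```bash
# Sketch: write the recommended laptop options to a config file.
# Only the two keys discussed above are shown; config.default.yaml
# contains additional keys that a working config would also need.
mkdir -p server/configs
cat > server/configs/config.laptop.yaml <<'EOF'
inference_mode: hybrid
local_deployment: minimal
EOF
```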

## NVIDIA Jetson Embedded Device Support
A [Dockerfile](./Dockerfile.jetson) is included that provides experimental support for [NVIDIA Jetson embedded devices](https://developer.nvidia.com/embedded-computing). The image provides GPU-accelerated ffmpeg, PyTorch, torchaudio, and torchvision dependencies. To build the docker image, [ensure that the default docker runtime is set to 'nvidia'](https://github.com/NVIDIA/nvidia-docker/wiki/Advanced-topics#default-runtime). A pre-built image is available at https://hub.docker.com/r/toolboc/nv-jarvis.

```bash
# Build the docker image
docker build --pull --rm -f "Dockerfile.jetson" -t toolboc/nv-jarvis:r35.2.1 .
```

Due to memory requirements, JARVIS must run on Jetson AGX Orin family devices (a device with 64GB of on-board RAM is preferred), with the config options set to:
* `inference_mode: local`
* `local_deployment: standard`

It is recommended to provide models and configs through a volume mount from the host to the container, as shown in the `docker run` step below. Alternatively, uncomment the `# Download local models` section of the [Dockerfile](./Dockerfile.jetson) to build a container with the models included.
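As a sketch of the host-side preparation, the directories below mirror the volume mounts used in the `docker run` step that follows. The config written here contains only the two required keys; a working config would carry the remaining keys from `config.default.yaml`.

```bash
# Create the host directories that will be volume-mounted into the container
# (paths match the docker run example, but are otherwise illustrative).
mkdir -p ~/jarvis/configs ~/src/JARVIS/server/models

# Write the required Jetson options; only the two keys discussed above
# are shown, and the remaining keys from config.default.yaml are omitted.
cat > ~/jarvis/configs/config.default.yaml <<'EOF'
inference_mode: local
local_deployment: standard
EOF
```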

### Start the model server, `awesome_chat.py`, and web app on a Jetson AGX Orin

```bash
# Run the container, which automatically starts the model server
docker run --name jarvis --net=host --gpus all -v ~/jarvis/configs:/app/server/configs -v ~/src/JARVIS/server/models:/app/server/models toolboc/nv-jarvis:r35.2.1

# (wait for model server to complete initialization)

# start awesome_chat.py
docker exec jarvis python3 awesome_chat.py --config configs/config.default.yaml --mode server

# Start the web application (accessible at http://localhost:9999)
docker exec jarvis npm run dev --prefix=/app/web
```

## Screenshots

<p align="center"><img src="./assets/screenshot_q.jpg"><img src="./assets/screenshot_a.jpg"></p>
