LLM

Development Guide

Step 1: Download the Model from Hugging Face

Please make sure you have Git LFS installed before cloning the model.

git lfs install
cd ./LLM/Models
# Here we are downloading the Meta-Llama-3-8B-Instruct model
git clone https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

You will be asked for a username and password. Use your Hugging Face username as the username and your Hugging Face API token as the password.
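
If you prefer a non-interactive clone, the credentials can also be embedded in the clone URL (a sketch; YOUR_HF_USERNAME and YOUR_HF_TOKEN are placeholders for your own Hugging Face username and API token):

git clone https://YOUR_HF_USERNAME:YOUR_HF_TOKEN@huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

Keep in mind that the token ends up in the repository's .git/config, so prefer the interactive prompt on shared machines.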

Step 2: Install Docker

Install Docker and Docker Compose

sudo apt-get update
sudo curl -sSL https://get.docker.com/ | sh  

Install Rootless Docker

sudo apt-get install -y uidmap
dockerd-rootless-setuptool.sh install
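
The setup tool prints a couple of environment variables to add to your shell profile. They typically look like the following (a sketch; use the exact values the tool prints for your user):

export PATH=/usr/bin:$PATH
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock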

See if the installation works

docker --version
docker ps 

# You should see no containers running, but you should not see any errors. 

Step 3: Install NVIDIA drivers on the machine to use the GPU
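
On Ubuntu, this can look like the following (a sketch; driver packages and repository setup vary by distro and GPU, so check NVIDIA's documentation):

sudo apt-get update
sudo ubuntu-drivers autoinstall   # installs the recommended NVIDIA driver
sudo reboot
# After the reboot, verify the driver is loaded
nvidia-smi

# Docker also needs the NVIDIA Container Toolkit to expose GPUs to containers.
# You may need to add NVIDIA's apt repository first; see their install guide.
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker   # for rootless Docker: systemctl --user restart docker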

Step 4: Run the test workload to see if the GPU is connected to Docker

docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark

If the benchmark runs and prints GPU results, you have configured the machine to use the GPU with Docker.

Build

  • Download the models from Meta.
  • Once the models are downloaded, place them in the LLM/Models folder. Make sure you also place tokenizer.model and tokenizer_checklist.chk in the same folder.
  • Edit the Dockerfile to set the model name in the MODEL_NAME variable (see the sketch after this list).
  • Build the Docker image:
npm run build-ai
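
For example, if you downloaded Meta-Llama-3-8B-Instruct, the variable might look like this (a sketch; the exact syntax depends on this repo's Dockerfile):

ENV MODEL_NAME=Meta-Llama-3-8B-Instruct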

Run

npm run start-ai    

After the service starts, run nvidia-smi to check that the GPU is being used. You should see the python process running on the GPU.
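
To watch GPU utilization while the model serves requests (a generic monitoring approach, not specific to this repo):

watch -n 1 nvidia-smi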