Llama

Prepare

  • Download the models from Meta.
  • Once the models are downloaded, place them in the Llama/Models folder. Make sure tokenizer.model and tokenizer_checklist.chk are placed in the same folder.
  • Edit the Dockerfile so that the MODEL_NAME variable matches the name of the model you downloaded (see the example after this list).
  • Build the Docker image:
docker build -t llama . -f ./Llama/Dockerfile 
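
A minimal sketch of the MODEL_NAME edit, assuming the variable is declared with an ENV instruction in Llama/Dockerfile; "llama-2-7b" is only a placeholder, so substitute the folder name of the model you actually downloaded:

# In Llama/Dockerfile -- the model name below is a placeholder, not a fixed value.
ENV MODEL_NAME="llama-2-7b"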

Run

For Linux

The --gpus all flag passes the host GPUs into the container and requires the NVIDIA Container Toolkit to be installed on the host:

docker run --gpus all -p 8547:8547 -it -v ./Llama/Models:/app/Models llama 

For macOS

Docker Desktop on macOS cannot pass a GPU into the container, so the image runs on CPU:

docker run -p 8547:8547 -it -v ./Llama/Models:/app/Models llama 
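
Once the container is up, you can send it a test request. The sketch below is hypothetical: the /prompt route and the JSON payload shape are assumptions, so check app.py for the actual endpoint and request schema. Only the port (8547) comes from the run commands above.

# Hypothetical smoke test; the /prompt route and payload shape are assumptions -- see app.py.
import requests

resp = requests.post(
    "http://localhost:8547/prompt",    # port matches the docker run mapping above
    json={"prompt": "Hello, Llama!"},  # payload shape is an assumption
    timeout=120,                       # first inference can be slow while the model loads
)
resp.raise_for_status()
print(resp.json())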

Run without a Docker container
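
Install the Python dependencies first (requirements.txt is in the Llama folder) and run the commands from the Llama directory:

pip install -r requirements.txt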

uvicorn app:app --host 0.0.0.0 --port 8547