Llama

Prepare

  • Download the models from Meta.
  • Once the models are downloaded, place them in the Llama/Models folder. Make sure you also place tokenizer.model and tokenizer_checklist.chk in the same folder.
  • Edit the Dockerfile to set the MODEL_NAME variable to the name of the model you downloaded (see the example after the build command below).
  • Build the Docker image:
docker build -t llama . -f ./Llama/Dockerfile
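
For reference, here is a sketch of how the Models folder and the MODEL_NAME setting might look. The model name llama-2-7b is only an example, and the exact variable syntax depends on how the Dockerfile declares it (an ENV form is assumed here):

Llama/Models/
    llama-2-7b/                  (example model folder downloaded from Meta)
    tokenizer.model
    tokenizer_checklist.chk

ENV MODEL_NAME=llama-2-7b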

Run

docker run -it llama
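
The REST API inside the container listens on port 8547 (see the uvicorn command below). To reach it from the host, you will likely want to publish that port when starting the container:

docker run -it -p 8547:8547 llama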

Run without a Docker container
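
Install the Python dependencies first (the repository ships a requirements.txt):

pip install -r requirements.txt

Then start the API server: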

uvicorn app:app --host 0.0.0.0 --port 8547
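
Once the server is up, you can smoke-test it from the host. The available endpoints are defined in app.py; assuming it is a FastAPI app (which the uvicorn setup suggests, but this is an assumption), the auto-generated API docs should be reachable at:

curl http://localhost:8547/docs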