docs: add Qwen2-1.5B-Instruct as default chat model used in installation tutorials (#2490)

commit b3988d5f32
parent efdadc6a5f
Author: Meng Zhang (committed by GitHub)
Date: 2024-06-24 18:38:17 +08:00
4 changed files with 7 additions and 7 deletions


@@ -11,7 +11,7 @@ Thanks to Apple's Accelerate and CoreML frameworks, we can now run Tabby on edge
 brew install tabbyml/tabby/tabby
 # Start server with StarCoder-1B
-tabby serve --device metal --model StarCoder-1B
+tabby serve --device metal --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct
 ```
 The compute power of M1/M2 is limited and likely sufficient only for individual usage. If you require a shared instance for a team, we recommend Docker hosting with CUDA or ROCm. You can find more information about Docker [here](../docker).
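To confirm that both models were picked up once the server is running, you can query the health endpoint. A minimal sketch, assuming the default port 8080 and the `/v1/health` route with `model`/`chat_model` fields reported by recent Tabby releases:

```bash
# Sanity-check the running server; the /v1/health route and its response
# fields are assumptions based on recent Tabby releases.
curl -s http://localhost:8080/v1/health
# The JSON response is expected to include "model": "StarCoder-1B"
# and "chat_model": "Qwen2-1.5B-Instruct".
```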


@@ -22,7 +22,7 @@ services:
   tabby:
     restart: always
     image: tabbyml/tabby
-    command: serve --model StarCoder-1B --device cuda
+    command: serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct --device cuda
     volumes:
       - "$HOME/.tabby:/data"
     ports:
@@ -47,7 +47,7 @@ services:
     restart: always
     image: tabbyml/tabby
     entrypoint: /opt/tabby/bin/tabby-cpu
-    command: serve --model StarCoder-1B
+    command: serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct
     volumes:
       - "$HOME/.tabby:/data"
     ports:
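Because both compose snippets only change the `command` line, applying the update follows the usual Compose workflow. A minimal sketch using the standard Docker Compose CLI; the service name `tabby` matches the snippets above:

```bash
# Recreate the service so the updated command (with --chat-model) takes effect.
docker compose up -d tabby

# Follow the logs while both models are downloaded and loaded.
docker compose logs -f tabby
```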


@@ -19,7 +19,7 @@ import TabItem from '@theme/TabItem';
 ```bash title="run.sh"
 docker run -it --gpus all \
   -p 8080:8080 -v $HOME/.tabby:/data \
-  tabbyml/tabby serve --model StarCoder-1B --device cuda
+  tabbyml/tabby serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct --device cuda
 ```
 </TabItem>
@@ -28,7 +28,7 @@ import TabItem from '@theme/TabItem';
 ```bash title="run.sh"
 docker run --entrypoint /opt/tabby/bin/tabby-cpu -it \
   -p 8080:8080 -v $HOME/.tabby:/data \
-  tabbyml/tabby serve --model StarCoder-1B
+  tabbyml/tabby serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct
 ```
 </TabItem>
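Before running the CUDA variant, it can be worth confirming that GPU passthrough works in Docker at all. A minimal sketch; the `nvidia/cuda` image tag here is only an assumption, and any recent tag will do:

```bash
# Verify the NVIDIA container runtime can see the GPU before starting Tabby.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```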


@@ -39,10 +39,10 @@ Open a command prompt or PowerShell window in the directory where the `tabby.exe
 Run the following command:
 ```
 # For CPU-only environments
-.\tabby.exe serve --model StarCoder-1B
+.\tabby.exe serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct
 # For CUDA-enabled environments
-.\tabby.exe serve --model StarCoder-1B --device cuda
+.\tabby.exe serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct --device cuda
 ```
 You should see a success message similar to the one in the screenshot below. After that, you can visit http://localhost:8080 to access your Tabby instance.
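Once the server is up, the newly added chat model can be smoke-tested directly from the command line. A minimal sketch, assuming Tabby's OpenAI-style chat route; the exact path may differ between Tabby versions:

```bash
# Send a single chat turn to the server; the /v1/chat/completions route
# is an assumption and may vary by Tabby version.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write hello world in Python."}]}'
```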