docs: add Qwen2-1.5B-Instruct as default chat model used in installation tutorials (#2490)
This commit is contained in: parent efdadc6a5f, commit b3988d5f32
@@ -11,7 +11,7 @@ Thanks to Apple's Accelerate and CoreML frameworks, we can now run Tabby on edge
 brew install tabbyml/tabby/tabby
 
 # Start server with StarCoder-1B
-tabby serve --device metal --model StarCoder-1B
+tabby serve --device metal --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct
 ```
 
 The compute power of M1/M2 is limited and is likely to be sufficient only for individual usage. If you require a shared instance for a team, we recommend considering Docker hosting with CUDA or ROCm. You can find more information about Docker [here](../docker).
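The change above means a single `tabby serve` invocation now loads both a code-completion model and a chat model. A minimal sketch for verifying the result, assuming the default port 8080 and a `/v1/health` endpoint (both assumptions, not shown in this diff):

```bash
# Start the server with both models, as in the updated line above.
tabby serve --device metal --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct &

# Give the models time to download and load on first run, then probe the
# health endpoint; the response should reflect the configured models if the
# flags took effect. Port and path are assumptions, not part of this diff.
sleep 30
curl -s http://localhost:8080/v1/health
```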
@@ -22,7 +22,7 @@ services:
   tabby:
     restart: always
     image: tabbyml/tabby
-    command: serve --model StarCoder-1B --device cuda
+    command: serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct --device cuda
     volumes:
       - "$HOME/.tabby:/data"
     ports:
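The hunk shows only the lines around the changed `command:`; for orientation, a complete compose file consistent with these fragments might look like the sketch below. The `deploy` block reserving an NVIDIA GPU is an assumption based on standard Compose syntax and is not part of this diff.

```yaml
version: '3.5'
services:
  tabby:
    restart: always
    image: tabbyml/tabby
    # The updated command: completion model plus chat model, running on CUDA.
    command: serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct --device cuda
    volumes:
      - "$HOME/.tabby:/data"
    ports:
      - 8080:8080
    # Assumed GPU reservation (standard Compose syntax, not shown in the hunk).
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```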
@@ -47,7 +47,7 @@ services:
     restart: always
     image: tabbyml/tabby
     entrypoint: /opt/tabby/bin/tabby-cpu
-    command: serve --model StarCoder-1B
+    command: serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct
     volumes:
       - "$HOME/.tabby:/data"
     ports:
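Both compose variants (the CUDA one above and this CPU-only one using the `tabby-cpu` entrypoint) are launched the same way; a usage sketch, assuming the file is saved as `docker-compose.yml` in the working directory:

```bash
# Start Tabby in the background, then follow the logs to watch the
# model downloads and server startup.
docker compose up -d
docker compose logs -f tabby
```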
@@ -19,7 +19,7 @@ import TabItem from '@theme/TabItem';
 ```bash title="run.sh"
 docker run -it --gpus all \
   -p 8080:8080 -v $HOME/.tabby:/data \
-  tabbyml/tabby serve --model StarCoder-1B --device cuda
+  tabbyml/tabby serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct --device cuda
 ```
 
 </TabItem>
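Once the container is up, the newly added chat model can be exercised directly. The sketch below assumes an OpenAI-style chat endpoint at `/v1/chat/completions`; the exact path may vary between Tabby versions, so treat it as an assumption and check the API docs for your release.

```bash
# Hypothetical smoke test for the chat model; the endpoint path is an
# assumption, not confirmed by this diff.
curl -s http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"messages": [{"role": "user", "content": "Write hello world in Python."}]}'
```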
@@ -28,7 +28,7 @@ import TabItem from '@theme/TabItem';
 ```bash title="run.sh"
 docker run --entrypoint /opt/tabby/bin/tabby-cpu -it \
   -p 8080:8080 -v $HOME/.tabby:/data \
-  tabbyml/tabby serve --model StarCoder-1B
+  tabbyml/tabby serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct
 ```
 
 </TabItem>
@@ -39,10 +39,10 @@ Open a command prompt or PowerShell window in the directory where the `tabby.exe
 Run the following command:
 ```
 # For CPU-only environments
-.\tabby.exe serve --model StarCoder-1B
+.\tabby.exe serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct
 
 # For CUDA-enabled environments
-.\tabby.exe serve --model StarCoder-1B --device cuda
+.\tabby.exe serve --model StarCoder-1B --chat-model Qwen2-1.5B-Instruct --device cuda
 ```
 
 You should see a success message similar to the one in the screenshot below. After that, you can visit http://localhost:8080 to access your Tabby instance.
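On Windows, the same verification works from a second terminal, since recent Windows 10/11 builds bundle a native `curl.exe`. A sketch, with the port and health path assumed as above:

```bash
# Probe the running tabby.exe server; runs in PowerShell or cmd on
# Windows 10/11, which ship curl.exe. Port and path are assumptions.
curl.exe -s http://localhost:8080/v1/health
```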