docs: add m1/m2 support status to chat model in model directory

Meng Zhang 2023-10-04 15:04:26 -07:00
parent 6b38b32117
commit e0b2a775d8


@@ -23,10 +23,10 @@ We recommend using
To ensure optimal response quality, and given that latency requirements are not stringent in this scenario, we recommend using a model with at least 3B parameters.
-| Model ID | License |
-| ------------------------------------------------------------------------- | :---------------------------------------------------------------------------------: |
-| [TabbyML/Mistral-7B](https://huggingface.co/TabbyML/Mistral-7B) | [Apache 2.0](https://opensource.org/licenses/Apache-2.0) |
-| [TabbyML/WizardCoder-3B](https://huggingface.co/TabbyML/WizardCoder-3B) | [OpenRAIL-M](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) |
+| Model ID | License | Apple M1/M2 Support |
+| ----------------------------------------------------------------------- | :---------------------------------------------------------------------------------: | :-----------------: |
+| [TabbyML/Mistral-7B](https://huggingface.co/TabbyML/Mistral-7B) | [Apache 2.0](https://opensource.org/licenses/Apache-2.0) | ✅ |
+| [TabbyML/WizardCoder-3B](https://huggingface.co/TabbyML/WizardCoder-3B) | [OpenRAIL-M](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) | ✅ |
## Alternative Registry