<div align="center">
# 🐾 Tabby
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
![Docker build status](https://img.shields.io/github/actions/workflow/status/TabbyML/tabby/docker.yml?label=docker%20image%20build)
![architecture](https://user-images.githubusercontent.com/388154/228543840-bff32fac-0802-4dd3-b0d9-2151647dfa6d.png)
</div>
> **Warning**
> Tabby is still in the alpha phase.

An open-source, on-premises alternative to GitHub Copilot.
## Features
* Self-contained, with no need for a DBMS or cloud service
* Web UI for visualizing and configuring models and MLOps.
* OpenAPI interface, easy to integrate with existing infrastructure (e.g., Cloud IDE).
* Consumer-grade GPU support (FP16 weight loading with various optimizations).
## Get started
### Docker
The easiest way to get started is with the official Docker image:
```bash
docker run \
-it --rm \
-v ./data:/data \
-v ./data/hf_cache:/root/.cache/huggingface \
-p 5000:5000 \
-p 8501:8501 \
-p 8080:8080 \
-e MODEL_NAME=TabbyML/J-350M \
tabbyml/tabby
```
You can then query the server using the `/v1/completions` endpoint:
```bash
curl -X POST http://localhost:5000/v1/completions -H 'Content-Type: application/json' --data '{
"prompt": "def binarySearch(arr, left, right, x):\n mid = (left +"
}'
```
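The same call can be made from Python. Below is a minimal sketch using only the standard library; the endpoint and payload mirror the curl example above, and the `completion_request` helper is purely illustrative, not part of Tabby:

```python
import json
import urllib.request

def completion_request(prompt, host="http://localhost:5000"):
    """Build a POST request for Tabby's /v1/completions endpoint."""
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"{host}/v1/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = completion_request("def binarySearch(arr, left, right, x):\n    mid = (left +")
# With the server running, urllib.request.urlopen(req) returns the JSON completion.
```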
To use the GPU backend (Triton) for faster inference, use `deployment/docker-compose.yml`:
```bash
docker-compose up
```
Note: To use GPUs, you need to install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html). We also recommend using NVIDIA drivers with CUDA version 11.8 or higher.
Tabby also provides an interactive playground in the admin panel at [localhost:8501](http://localhost:8501):
![image](https://user-images.githubusercontent.com/388154/227792390-ec19e9b9-ebbb-4a94-99ca-8a142ffb5e46.png)
### SkyPilot
See [deployment/skypilot/README.md](./deployment/skypilot/README.md).
## API documentation
Tabby runs a FastAPI server at [localhost:5000](http://localhost:5000), which embeds OpenAPI documentation of the HTTP API.
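Because FastAPI serves its machine-readable schema at `/openapi.json` by default, the available routes can also be inspected programmatically. A sketch, where the `list_endpoints` helper is illustrative and not part of Tabby:

```python
import json
import urllib.request

def list_endpoints(spec):
    """Return sorted (METHOD, path) pairs from an OpenAPI spec dict."""
    return sorted(
        (method.upper(), path)
        for path, operations in spec.get("paths", {}).items()
        for method in operations
    )

# Against a running server:
#   with urllib.request.urlopen("http://localhost:5000/openapi.json") as resp:
#       print(list_endpoints(json.load(resp)))
```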
## Development
Go to the `development` directory and run:
```bash
make dev
```
or
```bash
make dev-python  # Turns off the Triton backend (for developers without a CUDA environment)
```
## TODOs
* [ ] Fine-tuning models on a private code repository. [#23](https://github.com/TabbyML/tabby/issues/23)
* [ ] Production readiness (OpenTelemetry, Prometheus metrics).