
🐾 Tabby


Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features:

  • Self-contained, with no need for a DBMS or cloud service.
  • OpenAPI interface, easy to integrate with existing infrastructure (e.g., Cloud IDE).
  • Supports consumer-grade GPUs.

Open in Playground

Demo

🔥 What's New

Archived
  • 12/15/2023 v0.7.0 released with team management and secured access!
  • 11/27/2023 v0.6.0 released!
  • 11/09/2023 v0.5.5 released! With a UI redesign and performance improvements.
  • 10/24/2023 Major updates for Tabby IDE plugins across VSCode/Vim/IntelliJ!
  • 10/15/2023 RAG-based code completion is enabled by default in v0.3.0🎉! Check out the blogpost explaining how Tabby utilizes repo-level context to get even smarter!
  • 10/04/2023 Check out the model directory for the latest models supported by Tabby.
  • 09/18/2023 Apple's M1/M2 Metal inference support has landed in v0.1.1!
  • 08/31/2023 Tabby's first stable release v0.0.1 🥳.
  • 08/28/2023 Experimental support for the CodeLlama 7B.
  • 08/24/2023 Tabby is now on JetBrains Marketplace!

👋 Getting Started

You can find our documentation here.

Run Tabby in 1 Minute

The easiest way to start a Tabby server is by using the following Docker command:

docker run -it \
  --gpus all -p 8080:8080 -v $HOME/.tabby:/data \
  tabbyml/tabby \
  serve --model TabbyML/StarCoder-1B --device cuda

For additional options (e.g., inference type, parallelism), please refer to the documentation page.
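Once the server is up, you can exercise its OpenAPI interface directly. Below is a minimal sketch of building a code-completion request body; the endpoint path (`/v1/completions`) and payload shape (`language` plus `segments` with `prefix`/`suffix`) are assumed from Tabby's API documentation and may differ across versions, and the helper name `build_completion_request` is ours, not part of Tabby:

```python
import json

def build_completion_request(prefix, suffix="", language="python"):
    """Build the JSON body for a hypothetical POST to /v1/completions.

    The shape shown here is an assumption based on Tabby's OpenAPI docs;
    check your server's /swagger-ui for the authoritative schema.
    """
    return {
        "language": language,
        "segments": {"prefix": prefix, "suffix": suffix},
    }

# Ask the model to complete a half-written function body.
payload = build_completion_request("def fib(n):\n    ")
print(json.dumps(payload))
```

You would then POST this payload to `http://localhost:8080/v1/completions` with any HTTP client (e.g., `curl` or `urllib.request`) and read the completion choices from the JSON response.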

🤝 Contributing

Full guide at CONTRIBUTING.md.

Get the Code

git clone --recurse-submodules https://github.com/TabbyML/tabby
cd tabby

If you have already cloned the repository, you can run git submodule update --recursive --init to fetch all submodules.

Build

  1. Set up the Rust environment by following this tutorial.

  2. Install the required dependencies:

# For macOS
brew install protobuf

# For Ubuntu / Debian
apt-get install protobuf-compiler libopenblas-dev
  3. Now, you can build Tabby by running the command cargo build.

Start Hacking!

... and don't forget to submit a Pull Request!

🌍 Community

  • 🎤 Twitter / X - engage with TabbyML for all things possible
  • 📚 LinkedIn - follow for the latest from the community
  • 💌 Newsletter - subscribe to unlock Tabby insights and secrets

🔆 Activity

Git Repository Activity

🌟 Star History

Star History Chart