# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html),
and is generated by [Changie](https://github.com/miniscruff/changie).
## v0.20.0 (2024-11-08)
### Features
* Search results can now be edited directly.
* Allow switching backend chat models in Answer Engine.
* Added a connection test button in the `System` tab to test the connection to the backend LLM server.
### Fixes and Improvements
* Optimized CR-LF inference in code completion. ([#3279](https://github.com/TabbyML/tabby/issues/3279))
* Bumped `llama.cpp` version to `b3995`.
## v0.19.0 (2024-10-30)
### Features
* For Answer Engine, when the file content is reasonably short (e.g., less than 200 lines of code), include the entire file content directly instead of only the chunk ([#3096](https://github.com/TabbyML/tabby/issues/3096)).
* Allowed adding additional languages through the `config.toml` file.
* Allowed customizing the `system_prompt` for Answer Engine.
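
A rough `config.toml` sketch of the two features above (the table and key names are assumptions for illustration, not a confirmed schema):
```toml
# Hypothetical keys; consult the Tabby documentation for the exact schema.

# Register an additional language for completion and indexing.
[[additional_languages]]
languages = ["zig"]   # assumed key: language identifier
exts = ["zig"]        # assumed key: file extensions to associate

# Customize the Answer Engine system prompt.
[answer]
system_prompt = "You are a coding assistant for the ACME monorepo."
```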
### Fixes and Improvements
* Redesigned homepage to make team activities (e.g., threads discussed in Answer Engine) discoverable.
* Supported downloading models with multiple partitions (e.g., Qwen-2.5 series).
## v0.18.0 (2024-10-08)
### Notice
* The Chat Side Panel implementation has been redesigned in version 0.18, necessitating an extension version bump for compatibility with 0.18.0.
- VSCode: >= 1.12.0
- IntelliJ: >= 1.8.0
### Features
* User Groups Access Control: Server Administrators can now assign user groups to specific context providers to precisely control which contexts can be accessed by which user groups.
## v0.17.0 (2024-09-10)
### Notice
* We've reworked the `Web` (a beta feature) context provider into the `Developer Docs` context provider. Previously added context in the `Web` tab has been cleared and needs to be manually migrated to `Developer Docs`.
### Features
* Extensive rework has been done in the answer engine search box.
- Developer Docs / Web search is now triggered by `@`.
- Repository Context is now selected using `#`.
* Supports OCaml
## v0.16.1 (2024-08-27)
### Notice
* Starting from this version, we are utilizing websockets for features that require streaming (e.g., Answer Engine and Chat Side Panel). If you are deploying tabby behind a reverse proxy, you may need to configure the proxy to support websockets.
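
For example, a minimal Nginx location block that enables the websocket upgrade (the upstream address assumes Tabby listening on `localhost:8080`):
```nginx
location / {
    proxy_pass http://localhost:8080;
    # Required for websocket-based features (Answer Engine, Chat Side Panel)
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```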
### Features
* Discussion threads in the Answer Engine are now persisted, allowing users to share threads with others.
### Fixes and Improvements
* Fixed an issue where the llama-server subprocess was not being reused when reusing a model for Chat / Completion together (e.g., Codestral-22B) with the local model backend.
* Updated llama.cpp to version b3571 to support the jina series embedding models.
## v0.15.0 (2024-08-08)
### Features
* The search bar in the Code Browser has been reworked and integrated with file navigation functionality.
* GraphQL syntax highlighting support in Code Browser.
### Fixes and Improvements
* For linked GitHub repositories, issues and PRs are now only returned when the repository is selected.
* Fixed GitLab issues/MRs indexing - no longer panics if the description field is null.
* When connecting to localhost model servers, proxy settings are now skipped.
* Allowed setting code completion's `max_input_length` and `max_output_tokens` in `config.toml` (sketch below).
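
A sketch using the option names from the entry above (the `[completion]` table name and the values are assumptions):
```toml
[completion]
max_input_length = 1024   # context budget for completion requests (illustrative)
max_output_tokens = 64    # cap on generated tokens (illustrative)
```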
## v0.14.0 (2024-07-23)
### Features
* Code search functionality is now available in the `Code Browser` tab. Users can search for code using regex patterns and filter by language, repository, and branch.
* Initial experimental support for natural language to codebase conversation in `Answer Engine`.
### Fixes and Improvements
* Incremental indexing of issues / PRs by checking `updated_at`.
* Canonicalize `git_url` before performing a relevant code search. Previously, for git_urls with credentials, the canonicalized git_url was used in the index, but the query still used the raw git_url.
* Bumped llama.cpp to b3370, which fixes inference for the Qwen2 model series.
## v0.13.1 (2024-07-10)
### Fixes and Improvements
* Bumped llama.cpp version to b3334, supporting Deepseek V2 series models.
* Turned on fast attention for the Qwen2-1.5B model to fix a quantization error.
* Properly set the number of GPU layers to zero when the device is CPU.
## v0.13.0 (2024-06-28)
### Features
* Introduced a new Home page featuring the Answer Engine, which activates when the chat model is loaded.
* Enhanced the Answer Engine's context by indexing issues and pull requests.
* Supports web page crawling to further enrich the Answer Engine's context.
* Enabled navigation through various git trees in the git browser.
### Fixes and Improvements
* Turned on SHA-256 checksum verification for model downloading.
* Added an environment variable `TABBY_HUGGINGFACE_HOST_OVERRIDE` to override `huggingface.co` with compatible mirrors (e.g., `hf-mirror.com`) for model downloading (example below).
* Bumped `llama.cpp` version to [b3166](https://github.com/ggerganov/llama.cpp/releases/tag/3166).
* Improved logging for the `llama.cpp` backend.
* Added support for triggering background jobs in the admin UI.
* Enhanced logging for backend jobs in the admin UI.
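
A usage sketch for the mirror override above (the model name is just an example):
```bash
# Download models through a compatible mirror instead of huggingface.co
TABBY_HUGGINGFACE_HOST_OVERRIDE=hf-mirror.com tabby download --model TabbyML/StarCoder-1B
```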
## v0.12.0 (2024-05-31)
### Features
* Support GitLab SSO.
* Support connecting to self-hosted GitHub / GitLab.
* Repository Context is now utilized in "Code Browser" as well.
### Fixes and Improvements
* `llama-server` from llama.cpp is now distributed as an individual binary, allowing for more flexible configuration.
* The HTTP API is out of experimental: you can connect Tabby to models through the HTTP API (sketch below). The following APIs are currently supported:
- llama.cpp
- ollama
- mistral / codestral
- openai
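
As an illustration, connecting the chat model to an OpenAI-compatible backend could look like the following `config.toml` sketch (the exact field names and values are assumptions):
```toml
[model.chat.http]
kind = "openai/chat"                       # assumed kind identifier
model_name = "gpt-4o"                      # hypothetical model
api_endpoint = "https://api.openai.com/v1" # backend base URL
api_key = "your-api-key"
```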
## v0.11.1 (2024-05-14)
### Fixes and Improvements
* Fixed display of files where the path contains special characters. ([#2081](https://github.com/TabbyML/tabby/issues/2081))
* Fixed non-admin users not being able to see the repository in Code Browser. ([#2110](https://github.com/TabbyML/tabby/discussions/2110))
## v0.11.0 (2024-05-10)
### Notice
* The `--webserver` flag is now enabled by default in `tabby serve`. To turn off the webserver and only use OSS features, use the `--no-webserver` flag.
* The `/v1beta/chat/completions` endpoint has been moved to `/v1/chat/completions`, while the old endpoint is still available for backward compatibility.
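
Two quick sketches for the notices above (the model name and request body are illustrative):
```bash
# Run with OSS features only (webserver disabled)
tabby serve --model TabbyML/StarCoder-1B --no-webserver

# The chat completion endpoint now lives under /v1
curl http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```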
### Features
* Upgraded [llama.cpp](https://github.com/ggerganov/llama.cpp) to version [b2715](https://github.com/ggerganov/llama.cpp/releases/tag/b2715).
* Added support for integrating repositories from GitHub and GitLab using personal access tokens.
* Introduced a new **Activities** page to view user activities.
* Implemented incremental indexing for faster repository context updates.
* Added storage usage statistics in the **System** page.
* Included an `Ask Tabby` feature in the source code browser to provide in-context help from AI.
### Fixes and Improvements
* Changed the default model filename from `q8_0.v2.gguf` to `model.gguf` in MODEL_SPEC.md.
* Excluded activities from deactivated users in reports.
## v0.10.0 (2024-04-22)
### Features
* Introduced the `--chat-device` flag to specify the device used to run the chat model (example below).
* Added a "Reports" tab in the web interface, which provides team-wise statistics for Tabby IDE and Extensions usage (e.g., completions, acceptances).
* Enabled the use of segmented models with the `tabby download` command.
* Implemented the "Go to file" functionality in the Code Browser.
### Fixes and Improvements
* Fixed worker unregistration malfunction caused by unmatched addresses.
* Accurate repository context filtering using fuzzy matching on the `git_url` field.
* Support the use of client-side context, including function/class declarations from LSP, and relevant snippets from local changed files.
## v0.9.1 (2024-03-19)
### Fixes and Improvements
* Fix worker registration check against enterprise licenses.
* Fix default value of `disable_client_side_telemetry` when `--webserver` is not used.
## v0.9.0 (2024-03-06)
### Features
* Support for SMTP configuration in the user management system.
* Support for SSO and team management as features in the Enterprise tier.
* Fully managed repository indexing using `--webserver`, with job history logging available in the web interface.
## v0.8.3 (2024-02-06)
### Fixes and Improvements
* Ensured `~/.tabby/repositories` exists for tabby scheduler jobs: https://github.com/TabbyML/tabby/pull/1375
* Added a CPU-only binary `tabby-cpu` to the Docker distribution.
## v0.8.0 (2024-02-02)
### Notice
* Due to format changes, re-executing `tabby scheduler --now` is required to ensure that `Code Browser` functions properly.
### Features
* Introducing a preview release of the `Source Code Browser`, featuring visualization of code snippets utilized for code completion in RAG.
* Added a Windows CPU binary distribution.
* Added a Linux ROCm (AMD GPU) binary distribution.
### Fixes and Improvements
* Fixed an issue with cached permanent redirection in certain browsers (e.g., Chrome) when the `--webserver` flag is disabled.
* Introduced the `TABBY_MODEL_CACHE_ROOT` environment variable to individually override the model cache directory.
* The `/v1beta/chat/completions` API endpoint is now compatible with OpenAI's chat completion API.
* Models from our official registry can now be referred to without the `TabbyML/` prefix; for example, `TabbyML/CodeLlama-7B` can be referred to simply as `CodeLlama-7B` everywhere.
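
Sketches for the cache-root override and the prefix-less model names above (the cache path is illustrative):
```bash
# Store downloaded models in a custom directory
export TABBY_MODEL_CACHE_ROOT=/data/tabby-models

# The TabbyML/ prefix is now optional for official models
tabby download --model CodeLlama-7B   # same as TabbyML/CodeLlama-7B
```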
## v0.7.0 (2023-12-15)
### Features
* Tabby now includes built-in user management and secure access, ensuring that it is only accessible to your team.
* The `--webserver` flag is a new addition to `tabby serve` that enables secure access to the tabby server. When this flag is on, IDE extensions will need to provide an authorization token to access the instance.
- Some functionalities that are bound to the webserver (e.g. playground) will also require the `--webserver` flag.
### Fixes and Improvements
* Fixed https://github.com/TabbyML/tabby/issues/1036: event logs are now written to dated JSON files.
## v0.6.0 (2023-11-27)
### Features
* Added distribution support (running the completion / chat model on a different process / machine).
* Added conversation history in the chat playground.
* Added a `/metrics` endpoint for Prometheus metrics collection (example below).
### Fixes and Improvements
* Fixed slow repository indexing caused by the constrained memory arena in the tantivy index writer.
* Made `--model` optional, so users can create a chat-only instance.
* Added `--parallelism` to control throughput and VRAM usage: https://github.com/TabbyML/tabby/pull/727
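
A sketch combining the `--parallelism` flag with the `/metrics` endpoint from the Features list above (the values are illustrative):
```bash
# Serve with four parallel decoding streams (higher throughput, more VRAM)
tabby serve --model TabbyML/StarCoder-1B --parallelism 4

# Scrape Prometheus metrics
curl http://localhost:8080/metrics
```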
## v0.5.5 (2023-11-09)
### Notice
* The llama.cpp backend (CPU, Metal) now requires a redownload of the gguf model due to upstream format changes: https://github.com/TabbyML/tabby/pull/645 https://github.com/ggerganov/llama.cpp/pull/3252
* Due to indexing format changes, the `~/.tabby/index` needs to be manually removed before any further runs of `tabby scheduler`.
* `TABBY_REGISTRY` is replaced with `TABBY_DOWNLOAD_HOST` for the GitHub-based registry implementation.
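
A usage sketch for the renamed variable (the host name is hypothetical):
```bash
# Point the GitHub-based registry at a different download host
TABBY_DOWNLOAD_HOST=example-mirror.com tabby download --model TabbyML/StarCoder-1B
```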
### Features
* Improved dashboard UI.
### Fixes and Improvements
* The CPU backend switched to llama.cpp: https://github.com/TabbyML/tabby/pull/638
* Added `server.completion_timeout` to control the code completion interface timeout (sketch below): https://github.com/TabbyML/tabby/pull/637
* The CUDA backend switched to llama.cpp: https://github.com/TabbyML/tabby/pull/656
* The tokenizer implementation switched to llama.cpp, so Tabby no longer needs to download an additional tokenizer file: https://github.com/TabbyML/tabby/pull/683
* Fixed the deadlock issue reported in https://github.com/TabbyML/tabby/issues/718
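
A `config.toml` sketch for the `server.completion_timeout` option above (the value and its unit are assumptions):
```toml
[server]
completion_timeout = 30   # assumed to be in seconds; tune to your hardware
```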
## v0.4.0 (2023-10-24)
### Features
* Supports Golang: https://github.com/TabbyML/tabby/issues/553
* Supports Ruby: https://github.com/TabbyML/tabby/pull/597
* Supports using a local directory for `Repository.git_url`: use `file:///path/to/repo` to specify a local directory (sketch below).
* A new UI design for the webserver.
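
A sketch of the local-directory form in `config.toml` (the repository name is hypothetical):
```toml
[[repositories]]
name = "my-project"               # hypothetical repository name
git_url = "file:///path/to/repo"  # local directory instead of a remote git URL
```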
### Fixes and Improvements
* Improved snippet retrieval by deduplicating candidates against existing content + snippets: https://github.com/TabbyML/tabby/pull/582
## v0.3.1 (2023-10-21)
### Fixes and Improvements
* Fixed a GPU OOM issue caused by parallelism: https://github.com/TabbyML/tabby/issues/541, https://github.com/TabbyML/tabby/issues/587
* Fixed the git safe-directory check in Docker: https://github.com/TabbyML/tabby/issues/569
## v0.3.0 (2023-10-13)
### Features
#### Retrieval-Augmented Code Completion Enabled by Default
The currently supported languages are:
* Rust
* Python
* JavaScript / JSX
* TypeScript / TSX
A blog series detailing the technical aspects of Retrieval-Augmented Code Completion will be published soon. Stay tuned!
### Fixes and Improvements
* Fix [Issue #511](https://github.com/TabbyML/tabby/issues/511) by marking ggml models as optional.
* Improve stop words handling by combining RegexSet into Regex for efficiency.
## v0.2.2 (2023-10-09)
### Fixes and Improvements
* Fixed a critical issue that could cause request deadlocks in the ctranslate2 backend under heavy load.
## v0.2.1 (2023-10-03)
### Features
#### Chat Model & Web Interface
We have introduced a new argument, `--chat-model`, which allows you to specify the model for the chat playground located at http://localhost:8080/playground.
To utilize this feature, use the following command in the terminal:
```bash
tabby serve --device metal --model TabbyML/StarCoder-1B --chat-model TabbyML/Mistral-7B
```
#### ModelScope Model Registry
Mainland Chinese users have been facing challenges accessing Hugging Face due to various reasons. The Tabby team is actively working to address this issue by mirroring models to modelscope.cn, a hosting provider in mainland China.
```bash
## Download from the Modelscope registry
TABBY_REGISTRY=modelscope tabby download --model TabbyML/WizardCoder-1B
```
### Fixes and Improvements
* Implemented more accurate incremental UTF-8 decoding ([GitHub pull request](https://github.com/TabbyML/tabby/pull/491)).
* Fixed the stop words implementation by utilizing RegexSet to isolate the stop word group.
* Improved the model downloading logic: Tabby now attempts to fetch the latest model version if there is a remote change and the local cache key has become stale.
* Set a default `num_replicas_per_device` for the ctranslate2 backend to increase parallelism.