From b8a52473fdd39c48d9cb22c17913262345b1f052 Mon Sep 17 00:00:00 2001 From: Meng Zhang Date: Fri, 10 May 2024 15:11:25 -0700 Subject: [PATCH] chore: adopt changie for automatic changelog management (#2092) * chore: adopt changie for automatic changelog management * update --- .changes/header.tpl.md | 6 ++ .changes/unreleased/.gitkeep | 0 .changes/v0.11.0.md | 195 +++++++++++++++++++++++++++++++++++ .changie.yaml | 22 ++++ CHANGELOG.md | 101 ++++++++++-------- 5 files changed, 279 insertions(+), 45 deletions(-) create mode 100644 .changes/header.tpl.md create mode 100644 .changes/unreleased/.gitkeep create mode 100644 .changes/v0.11.0.md create mode 100644 .changie.yaml diff --git a/.changes/header.tpl.md b/.changes/header.tpl.md new file mode 100644 index 000000000..df8faa7b2 --- /dev/null +++ b/.changes/header.tpl.md @@ -0,0 +1,6 @@ +# Changelog +All notable changes to this project will be documented in this file. + +The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), +adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html), +and is generated by [Changie](https://github.com/miniscruff/changie). diff --git a/.changes/unreleased/.gitkeep b/.changes/unreleased/.gitkeep new file mode 100644 index 000000000..e69de29bb diff --git a/.changes/v0.11.0.md b/.changes/v0.11.0.md new file mode 100644 index 000000000..9a88a0712 --- /dev/null +++ b/.changes/v0.11.0.md @@ -0,0 +1,195 @@ +## v0.11.0 (05/10/2024) + +### Notice + +* The `--webserver` flag is now enabled by default in `tabby serve`. To turn off the webserver and only use OSS features, use the `--no-webserver` flag. +* The `/v1beta/chat/completions` endpoint has been moved to `/v1/chat/completions`, while the old endpoint is still available for backward compatibility. + +### Features +* Upgraded [llama.cpp](https://github.com/ggerganov/llama.cpp) to version [b2715](https://github.com/ggerganov/llama.cpp/releases/tag/b2715). 
+* Added support for integrating repositories from GitHub and GitLab using personal access tokens. +* Introduced a new **Activities** page to view user activities. +* Implemented incremental indexing for faster repository context updates. +* Added storage usage statistics in the **System** page. +* Included an `Ask Tabby` feature in the source code browser to provide in-context help from AI. + +### Fixes and Improvements +* Changed the default model filename from `q8_0.v2.gguf` to `model.gguf` in MODEL_SPEC.md. +* Excluded activities from deactivated users in reports. + +## v0.10.0 (04/22/2024) + +### Features +* Introduced the `--chat-device` flag to specify the device used to run the chat model. +* Added a "Reports" tab in the web interface, which provides team-wise statistics for Tabby IDE and Extensions usage (e.g., completions, acceptances). +* Enabled the use of segmented models with the `tabby download` command. +* Implemented the "Go to file" functionality in the Code Browser. + +### Fixes and Improvements +* Fix worker unregistration malfunction caused by a mismatched address. +* Accurate repository context filtering using fuzzy matching on the `git_url` field. +* Support the use of client-side context, including function/class declarations from LSP and relevant snippets from locally changed files. + +## v0.9.1 (03/19/2024) + +### Fixes and Improvements +* Fix worker registration check against enterprise licenses. +* Fix default value of `disable_client_side_telemetry` when `--webserver` is not used. + +## v0.9.0 (03/06/2024) + +### Features + +* Support for SMTP configuration in the user management system. +* Support for SSO and team management as features in the Enterprise tier. +* Fully managed repository indexing using `--webserver`, with job history logging available in the web interface.
+ +## v0.8.3 (02/06/2024) + +### Fixes and Improvements + +* Ensure `~/.tabby/repositories` exists for tabby scheduler jobs: https://github.com/TabbyML/tabby/pull/1375 +* Add a CPU-only binary, `tabby-cpu`, to the Docker distribution. + +## v0.8.0 (02/02/2024) + +### Notice + +* Due to format changes, re-executing `tabby scheduler --now` is required to ensure that `Code Browser` functions properly. + +### Features + +* Introducing a preview release of the `Source Code Browser`, featuring visualization of code snippets utilized for code completion in RAG. +* Added a Windows CPU binary distribution. +* Added a Linux ROCm (AMD GPU) binary distribution. + +### Fixes and Improvements + +* Fixed an issue with cached permanent redirection in certain browsers (e.g., Chrome) when the `--webserver` flag is disabled. +* Introduced the `TABBY_MODEL_CACHE_ROOT` environment variable to individually override the model cache directory. +* The `/v1beta/chat/completions` API endpoint is now compatible with OpenAI's chat completion API. +* Models from our official registry can now be referred to without the TabbyML prefix. Therefore, for the model TabbyML/CodeLlama-7B, you can simply refer to it as CodeLlama-7B everywhere. + +## v0.7.0 (12/15/2023) + +### Features + +* Tabby now includes built-in user management and secure access, ensuring that it is only accessible to your team. +* The `--webserver` flag is a new addition to `tabby serve` that enables secure access to the tabby server. When this flag is on, IDE extensions will need to provide an authorization token to access the instance. + - Some functionalities that are bound to the webserver (e.g. playground) will also require the `--webserver` flag. + + +### Fixes and Improvements + +* Fix https://github.com/TabbyML/tabby/issues/1036: event logs are now written to dated JSON files. + +## v0.6.0 (11/27/2023) + +### Features + +* Add distribution support (running the completion/chat models on different processes/machines).
+* Add conversation history in the chat playground. +* Add a `/metrics` endpoint for Prometheus metrics collection. + +### Fixes and Improvements + +* Fix the slow repository indexing due to a constrained memory arena in the tantivy index writer. +* Make `--model` optional, so users can create a chat-only instance. +* Add `--parallelism` to control the throughput and VRAM usage: https://github.com/TabbyML/tabby/pull/727 + +## v0.5.5 (11/09/2023) + +### Fixes and Improvements + +### Notice + +* The llama.cpp backend (CPU, Metal) now requires a redownload of the gguf model due to upstream format changes: https://github.com/TabbyML/tabby/pull/645 https://github.com/ggerganov/llama.cpp/pull/3252 +* Due to indexing format changes, the `~/.tabby/index` needs to be manually removed before any further runs of `tabby scheduler`. +* `TABBY_REGISTRY` is replaced with `TABBY_DOWNLOAD_HOST` for the GitHub-based registry implementation. + +### Features + +* Improved dashboard UI. + +### Fixes and Improvements + +* CPU backend is switched to llama.cpp: https://github.com/TabbyML/tabby/pull/638 +* Add `server.completion_timeout` to control the code completion interface timeout: https://github.com/TabbyML/tabby/pull/637 +* CUDA backend is switched to llama.cpp: https://github.com/TabbyML/tabby/pull/656 +* Tokenizer implementation is switched to llama.cpp, so Tabby no longer needs to download an additional tokenizer file: https://github.com/TabbyML/tabby/pull/683 +* Fix deadlock issue reported in https://github.com/TabbyML/tabby/issues/718 + +## v0.4.0 (10/24/2023) + +### Features + +* Supports golang: https://github.com/TabbyML/tabby/issues/553 +* Supports ruby: https://github.com/TabbyML/tabby/pull/597 +* Supports using a local directory for `Repository.git_url`: use `file:///path/to/repo` to specify a local directory. +* A new UI design for the webserver.
+ +### Fixes and Improvements + +* Improve snippet retrieval by deduplicating candidates against existing content and snippets: https://github.com/TabbyML/tabby/pull/582 + +## v0.3.1 (10/21/2023) +### Fixes and Improvements + +* Fix a GPU OOM issue caused by parallelism: https://github.com/TabbyML/tabby/issues/541, https://github.com/TabbyML/tabby/issues/587 +* Fix the git safe-directory check in Docker: https://github.com/TabbyML/tabby/issues/569 + +## v0.3.0 (10/13/2023) + +### Features +#### Retrieval-Augmented Code Completion Enabled by Default + +The currently supported languages are: + +* Rust +* Python +* JavaScript / JSX +* TypeScript / TSX + +A blog series detailing the technical aspects of Retrieval-Augmented Code Completion will be published soon. Stay tuned! + +### Fixes and Improvements + +* Fix [Issue #511](https://github.com/TabbyML/tabby/issues/511) by marking ggml models as optional. +* Improve stop words handling by combining RegexSet into Regex for efficiency. + +## v0.2.2 (10/09/2023) +### Fixes and Improvements + +* Fix a critical issue that might cause request deadlocking in the ctranslate2 backend under heavy load. + +## v0.2.1 (10/03/2023) +### Features +#### Chat Model & Web Interface + +We have introduced a new argument, `--chat-model`, which allows you to specify the model for the chat playground located at http://localhost:8080/playground + +To utilize this feature, use the following command in the terminal: + +```bash +tabby serve --device metal --model TabbyML/StarCoder-1B --chat-model TabbyML/Mistral-7B +``` + +#### ModelScope Model Registry + +Mainland Chinese users have been facing challenges accessing Hugging Face for various reasons. The Tabby team is actively working to address this issue by mirroring models to a hosting provider in mainland China called modelscope.cn.
+ +```bash +# Download from the ModelScope registry +TABBY_REGISTRY=modelscope tabby download --model TabbyML/WizardCoder-1B +``` + +### Fixes and Improvements + +* Implemented more accurate UTF-8 incremental decoding in the [GitHub pull request](https://github.com/TabbyML/tabby/pull/491). +* Fixed the stop words implementation by utilizing RegexSet to isolate the stop word group. +* Improved model downloading logic; now Tabby will attempt to fetch the latest model version if there is a remote change and the local cache key becomes stale. +* Set the default `num_replicas_per_device` for the ctranslate2 backend to increase parallelism. + + + +No releases yet, this file will be updated when generating your first release. \ No newline at end of file diff --git a/.changie.yaml b/.changie.yaml new file mode 100644 index 000000000..9679ab87c --- /dev/null +++ b/.changie.yaml @@ -0,0 +1,22 @@ +changesDir: .changes +unreleasedDir: unreleased +headerPath: header.tpl.md +changelogPath: CHANGELOG.md +versionExt: md +versionFormat: '## {{.Version}} ({{.Time.Format "2006-01-02"}})' +kindFormat: '### {{.Kind}}' +changeFormat: '* {{.Body}}' +kinds: +- label: Notice + auto: minor +- label: Features + auto: minor +- label: Fixes and Improvements + auto: patch +newlines: + afterChangelogHeader: 1 + afterKind: 1 + afterChangelogVersion: 1 + beforeKind: 1 + endOfVersion: 1 +envPrefix: CHANGIE_ \ No newline at end of file diff --git a/CHANGELOG.md b/CHANGELOG.md index 76a8db151..0e23d4cfd 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,11 +1,18 @@ -# v0.11.0 (05/10/2024) +# Changelog +All notable changes to this project will be documented in this file. -## Notice +The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), +adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html), +and is generated by [Changie](https://github.com/miniscruff/changie). + +## v0.11.0 (05/10/2024) + +### Notice + +* The `--webserver` flag is now enabled by default in `tabby serve`.
To turn off the webserver and only use OSS features, use the `--no-webserver` flag. * The `/v1beta/chat/completions` endpoint has been moved to `/v1/chat/completions`, while the old endpoint is still available for backward compatibility. -## Features +### Features * Upgraded [llama.cpp](https://github.com/ggerganov/llama.cpp) to version [b2715](https://github.com/ggerganov/llama.cpp/releases/tag/b2715). * Added support for integrating repositories from GitHub and GitLab using personal access tokens. * Introduced a new **Activities** page to view user activities. @@ -13,105 +20,105 @@ * Added storage usage statistics in the **System** page. * Included an `Ask Tabby` feature in the source code browser to provide in-context help from AI. -## Fixes and Improvements +### Fixes and Improvements * Changed the default model filename from `q8_0.v2.gguf` to `model.gguf` in MODEL_SPEC.md. * Excluded activities from deactivated users in reports. -# v0.10.0 (04/22/2024) +## v0.10.0 (04/22/2024) -## Features +### Features * Introduced the `--chat-device` flag to specify the device used to run the chat model. * Added a "Reports" tab in the web interface, which provides team-wise statistics for Tabby IDE and Extensions usage (e.g., completions, acceptances). * Enabled the use of segmented models with the `tabby download` command. * Implemented the "Go to file" functionality in the Code Browser. -## Fixes and Improvements +### Fixes and Improvements * Fix worker unregistration malfunction caused by a mismatched address. * Accurate repository context filtering using fuzzy matching on the `git_url` field. * Support the use of client-side context, including function/class declarations from LSP and relevant snippets from locally changed files. -# v0.9.1 (03/19/2024) +## v0.9.1 (03/19/2024) -## Fixes and Improvements +### Fixes and Improvements * Fix worker registration check against enterprise licenses. * Fix default value of `disable_client_side_telemetry` when `--webserver` is not used.
-# v0.9.0 (03/06/2024) +## v0.9.0 (03/06/2024) -## Features +### Features * Support for SMTP configuration in the user management system. * Support for SSO and team management as features in the Enterprise tier. * Fully managed repository indexing using `--webserver`, with job history logging available in the web interface. -# v0.8.3 (02/06/2024) +## v0.8.3 (02/06/2024) -## Fixes and Improvements +### Fixes and Improvements * Ensure `~/.tabby/repositories` exists for tabby scheduler jobs: https://github.com/TabbyML/tabby/pull/1375 * Add a CPU-only binary, `tabby-cpu`, to the Docker distribution. -# v0.8.0 (02/02/2024) +## v0.8.0 (02/02/2024) -## Notice +### Notice * Due to format changes, re-executing `tabby scheduler --now` is required to ensure that `Code Browser` functions properly. -## Features +### Features * Introducing a preview release of the `Source Code Browser`, featuring visualization of code snippets utilized for code completion in RAG. * Added a Windows CPU binary distribution. * Added a Linux ROCm (AMD GPU) binary distribution. -## Fixes and Improvements +### Fixes and Improvements * Fixed an issue with cached permanent redirection in certain browsers (e.g., Chrome) when the `--webserver` flag is disabled. * Introduced the `TABBY_MODEL_CACHE_ROOT` environment variable to individually override the model cache directory. * The `/v1beta/chat/completions` API endpoint is now compatible with OpenAI's chat completion API. * Models from our official registry can now be referred to without the TabbyML prefix. Therefore, for the model TabbyML/CodeLlama-7B, you can simply refer to it as CodeLlama-7B everywhere. -# v0.7.0 (12/15/2023) +## v0.7.0 (12/15/2023) -## Features +### Features * Tabby now includes built-in user management and secure access, ensuring that it is only accessible to your team. * The `--webserver` flag is a new addition to `tabby serve` that enables secure access to the tabby server.
When this flag is on, IDE extensions will need to provide an authorization token to access the instance. - Some functionalities that are bound to the webserver (e.g. playground) will also require the `--webserver` flag. -## Fixes and Improvements +### Fixes and Improvements * Fix https://github.com/TabbyML/tabby/issues/1036: event logs are now written to dated JSON files. -# v0.6.0 (11/27/2023) +## v0.6.0 (11/27/2023) -## Features +### Features * Add distribution support (running the completion/chat models on different processes/machines). * Add conversation history in the chat playground. * Add a `/metrics` endpoint for Prometheus metrics collection. -## Fixes and Improvements +### Fixes and Improvements * Fix the slow repository indexing due to a constrained memory arena in the tantivy index writer. * Make `--model` optional, so users can create a chat-only instance. * Add `--parallelism` to control the throughput and VRAM usage: https://github.com/TabbyML/tabby/pull/727 -# v0.5.5 (11/09/2023) +## v0.5.5 (11/09/2023) -## Fixes and Improvements +### Fixes and Improvements -## Notice +### Notice * The llama.cpp backend (CPU, Metal) now requires a redownload of the gguf model due to upstream format changes: https://github.com/TabbyML/tabby/pull/645 https://github.com/ggerganov/llama.cpp/pull/3252 * Due to indexing format changes, the `~/.tabby/index` needs to be manually removed before any further runs of `tabby scheduler`. * `TABBY_REGISTRY` is replaced with `TABBY_DOWNLOAD_HOST` for the GitHub-based registry implementation. -## Features +### Features * Improved dashboard UI.
-## Fixes and Improvements +### Fixes and Improvements * CPU backend is switched to llama.cpp: https://github.com/TabbyML/tabby/pull/638 * Add `server.completion_timeout` to control the code completion interface timeout: https://github.com/TabbyML/tabby/pull/637 @@ -119,29 +126,29 @@ * Tokenizer implementation is switched to llama.cpp, so Tabby no longer needs to download an additional tokenizer file: https://github.com/TabbyML/tabby/pull/683 * Fix deadlock issue reported in https://github.com/TabbyML/tabby/issues/718 -# v0.4.0 (10/24/2023) +## v0.4.0 (10/24/2023) -## Features +### Features * Supports golang: https://github.com/TabbyML/tabby/issues/553 * Supports ruby: https://github.com/TabbyML/tabby/pull/597 * Supports using a local directory for `Repository.git_url`: use `file:///path/to/repo` to specify a local directory. * A new UI design for the webserver. -## Fixes and Improvements +### Fixes and Improvements * Improve snippet retrieval by deduplicating candidates against existing content and snippets: https://github.com/TabbyML/tabby/pull/582 -# v0.3.1 (10/21/2023) -## Fixes and improvements +## v0.3.1 (10/21/2023) +### Fixes and Improvements * Fix a GPU OOM issue caused by parallelism: https://github.com/TabbyML/tabby/issues/541, https://github.com/TabbyML/tabby/issues/587 * Fix the git safe-directory check in Docker: https://github.com/TabbyML/tabby/issues/569 -# v0.3.0 (10/13/2023) +## v0.3.0 (10/13/2023) -## Features -### Retrieval-Augmented Code Completion Enabled by Default +### Features +#### Retrieval-Augmented Code Completion Enabled by Default The currently supported languages are: @@ -152,19 +159,19 @@ The currently supported languages are: A blog series detailing the technical aspects of Retrieval-Augmented Code Completion will be published soon. Stay tuned! -## Fixes and Improvements +### Fixes and Improvements * Fix [Issue #511](https://github.com/TabbyML/tabby/issues/511) by marking ggml models as optional.
* Improve stop words handling by combining RegexSet into Regex for efficiency. -# v0.2.2 (10/09/2023) -## Fixes and improvements +## v0.2.2 (10/09/2023) +### Fixes and Improvements * Fix a critical issue that might cause request deadlocking in the ctranslate2 backend under heavy load. -# v0.2.1 (10/03/2023) -## Features -### Chat Model & Web Interface +## v0.2.1 (10/03/2023) +### Features +#### Chat Model & Web Interface We have introduced a new argument, `--chat-model`, which allows you to specify the model for the chat playground located at http://localhost:8080/playground To utilize this feature, use the following command in the terminal: @@ -174,18 +181,22 @@ To utilize this feature, use the following command in the terminal: tabby serve --device metal --model TabbyML/StarCoder-1B --chat-model TabbyML/Mistral-7B ``` -### ModelScope Model Registry +#### ModelScope Model Registry Mainland Chinese users have been facing challenges accessing Hugging Face for various reasons. The Tabby team is actively working to address this issue by mirroring models to a hosting provider in mainland China called modelscope.cn. ```bash -# Download from the Modelscope registry +# Download from the ModelScope registry TABBY_REGISTRY=modelscope tabby download --model TabbyML/WizardCoder-1B ``` -## Fixes and improvements +### Fixes and Improvements * Implemented more accurate UTF-8 incremental decoding in the [GitHub pull request](https://github.com/TabbyML/tabby/pull/491). * Fixed the stop words implementation by utilizing RegexSet to isolate the stop word group. * Improved model downloading logic; now Tabby will attempt to fetch the latest model version if there is a remote change and the local cache key becomes stale. * Set the default `num_replicas_per_device` for the ctranslate2 backend to increase parallelism. + + + +No releases yet, this file will be updated when generating your first release.
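For contributors picking up the workflow this patch adopts: with the `.changie.yaml` above, each pending change is recorded as a small YAML fragment under `.changes/unreleased/` (typically created via `changie new`), and release time folds those fragments into `.changes/<version>.md` and `CHANGELOG.md`. A minimal fragment might look like the following sketch — the filename, body text, and timestamp are illustrative, not taken from this patch:

```yaml
# .changes/unreleased/Features-20240510-151125.yaml (hypothetical filename;
# changie names fragments after the kind and creation time by default)

# `kind` must match one of the labels configured under `kinds` in .changie.yaml
kind: Features
# `body` is rendered through changeFormat ('* {{.Body}}') into the changelog
body: Added support for integrating repositories using personal access tokens.
# RFC 3339 timestamp recorded by `changie new`
time: 2024-05-10T15:11:25.000000-07:00
```

With fragments in place, `changie batch <version>` would collect them into a new `.changes/<version>.md`, and `changie merge` would regenerate `CHANGELOG.md` from `header.tpl.md` plus the version files — which is how the generated files in this patch are meant to stay in sync going forward.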