tabby/crates/llama-cpp-server
Wei Zhang 779f785827
feat(download): allow fetching model files with multiple partitions (#3258)
* finish main logic

* add unit tests

* [autofix.ci] apply automated fixes

* feat: use indexed model name

* chore: apply review feedback from meng

* chore: revert unnecessary downloader change

* chore: fix unit tests

* chore: download one file at a time

* [autofix.ci] apply automated fixes

* chore: fix unit tests

* chore: address review feedback from meng

* chore: fix ci

* chore: revert multibar

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes (attempt 2/3)

* chore: run download address filter tests serially

* chore: use the workspace dep for serial_test

---------

Co-authored-by: leili <lilei@deeproute.ai>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2024-10-21 13:34:03 +08:00
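
The commits above split a large model download into multiple indexed partition files fetched one at a time. Below is a minimal sketch of that idea, not Tabby's actual implementation: `download_split_model`, `partition_name`, and the llama.cpp-style `-00001-of-0000N.gguf` naming are assumptions, and the HTTP fetch uses `reqwest`'s blocking client for brevity.

```rust
// Hypothetical sketch of a multi-partition model download; function names and
// the partition naming scheme are assumptions, not Tabby's real API.
// Assumes Cargo.toml declares: reqwest = { version = "0.12", features = ["blocking"] }
use std::fs::File;
use std::io;
use std::path::Path;

/// Build an indexed partition file name, e.g. "model-00001-of-00003.gguf".
fn partition_name(stem: &str, index: usize, total: usize) -> String {
    format!("{stem}-{index:05}-of-{total:05}.gguf")
}

/// Fetch a single partition and write it to `dest`.
fn download_one(url: &str, dest: &Path) -> io::Result<()> {
    let mut resp = reqwest::blocking::get(url)
        .map_err(|e| io::Error::new(io::ErrorKind::Other, e))?;
    let mut file = File::create(dest)?;
    io::copy(&mut resp, &mut file)?;
    Ok(())
}

/// Download all partitions sequentially ("one file at a time").
fn download_split_model(
    base_url: &str,
    stem: &str,
    total: usize,
    out_dir: &Path,
) -> io::Result<()> {
    for index in 1..=total {
        let name = partition_name(stem, index, total);
        download_one(&format!("{base_url}/{name}"), &out_dir.join(&name))?;
    }
    Ok(())
}
```

Fetching one partition at a time keeps a single progress bar meaningful, which lines up with the `revert multibar` commit above.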
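
The last two commits touch test isolation: tests that mutate shared download-address state cannot safely run in parallel, so the PR pulls `serial_test` in as a workspace dependency. A hedged example of the pattern follows; the test names and bodies are illustrative placeholders, only the `#[serial]` attribute is the crate's real API.

```rust
#[cfg(test)]
mod tests {
    use serial_test::serial;

    // `#[serial]` makes these tests run one at a time instead of in
    // parallel, so mutations to shared download-address state cannot
    // interleave between tests.
    #[test]
    #[serial]
    fn filter_download_address_with_mirror() {
        // mutate the global download address filter, then assert on it
    }

    #[test]
    #[serial]
    fn filter_download_address_default() {
        // observes the default state; must not race the test above
    }
}
```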
llama.cpp@5ef07e25ac chore(llama.cpp): bump version to b3571 (#2851) 2024-08-12 13:54:44 -07:00
src feat(download): allow fetching model files with multiple partitions (#3258) 2024-10-21 13:34:03 +08:00
build.rs fix(build): disable GGML_NATIVE explicitly (#3118) 2024-09-10 14:27:22 -07:00
Cargo.toml refactor(webserver): switch to openai chat interface (#2564) 2024-07-03 15:44:34 +09:00