chore: Update Copilot configuration files

- Update Copilot Dockerfile.tpl to remove unnecessary code and fix formatting.
- Update config.example.env and Copilot/.env.example to correct the LLM server URL.
- Remove unused code and update documentation in Copilot/README.md.
- Update navigation links in App/FeatureSet/Docs/Utils/Nav.ts to include Copilot documentation.
- Update Copilot/Config.ts and Copilot/Utils/Init.ts to use the new LLM server URL.
- Add logger statements in Copilot/Service/CopilotActions/CopilotActionsBase.ts to log file content.
Simon Larsen 2024-07-10 18:19:09 +01:00
parent 04650f165f
commit d8c8a76c1d
13 changed files with 223 additions and 22 deletions


@@ -35,4 +35,4 @@ jobs:
           -e ONEUPTIME_REPOSITORY_SECRET_KEY=${{ secrets.COPILOT_ONEUPTIME_REPOSITORY_SECRET_KEY }} \
           -e CODE_REPOSITORY_PASSWORD=${{ github.token }} \
           -e CODE_REPOSITORY_USERNAME='simlarsen' \
-          -e ONEUPTIME_LLAMA_SERVER_URL='http://57.128.112.160:8547'
+          -e ONEUPTIME_LLM_SERVER_URL='http://57.128.112.160:8547'


@@ -0,0 +1,55 @@
## Deploy LLM Server
This step is optional. You only need to deploy the LLM Server if you want to use Copilot with an LLM Server on your own infrastructure for data-privacy reasons. If you are comfortable with OpenAI's privacy policy, you can skip this step and use OpenAI directly.
### Pre-requisites
Before you deploy the LLM Server, make sure you have the following:
- **Docker**: You need to have Docker installed on your machine.
- **Docker Compose**: You need to have Docker Compose installed on your machine.
- **System Requirements**: You need at least 64 GB of RAM, a GPU with 32 GB of memory (compatible with CUDA and Docker), 8 CPU cores, and 100 GB of disk space. You can get away with fewer resources, but we recommend this configuration for optimal performance.
- **GPU is accessible by Docker**: You need to make sure that the GPU is accessible by Docker (see the quick check after this list). Please read this [guide](https://docs.docker.com/compose/gpu-support/) for more information.
- **OneUptime Server URL**: You need the URL of your OneUptime server. If you are using the SaaS service, it's `https://oneuptime.com`. If you are self-hosting OneUptime, use the URL of your self-hosted server.
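A quick way to check the GPU prerequisite is to run `nvidia-smi` from inside a container. This is a minimal sketch; the exact CUDA image tag is an assumption, so substitute any recent `nvidia/cuda` base tag:
```bash
# If Docker can see the GPU, this prints the usual nvidia-smi device table.
# The image tag below is illustrative -- use any recent nvidia/cuda base tag.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```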
### Installation
To deploy the LLM Server, follow these steps with Docker Compose:
Create a `docker-compose.yml` file with the following content:
```yaml
services:
  llm:
    extends:
      file: ./docker-compose.base.yml
      service: llm
    ports:
      - '8547:8547'
    image: 'oneuptime/llm:release'
    environment:
      ONEUPTIME_URL: 'https://oneuptime.com'
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```
Run the following command to start the LLM Server:
```bash
docker-compose up -d
```
You can now access the LLM Server at `http://localhost:8547`.
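To confirm the server is actually reachable before wiring it up to Copilot, a plain HTTP request is enough; the routes the image exposes aren't documented here, so treat this only as a connectivity check:
```bash
# Any response (even an HTTP error status) proves the port is open and the
# service is listening; "connection refused" means the container is not ready.
curl -i http://localhost:8547
```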
### TLS/SSL Configuration
You can set up TLS/SSL by placing a reverse proxy in front of the LLM Server. This is recommended for production deployments; a full TLS setup is beyond the scope of this document, but a minimal sketch follows.
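As one minimal illustration of the reverse-proxy approach, you could run Caddy in front of the server, which obtains TLS certificates automatically. The domain below is a placeholder, and this assumes ports 80 and 443 are free on the host:
```bash
# Caddy terminates TLS on ports 80/443 and forwards traffic to the LLM Server
# on port 8547. Replace llm.example.com with your own domain.
docker run -d --name caddy --network host caddy:2 \
  caddy reverse-proxy --from llm.example.com --to localhost:8547
```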
### Public Access
Please make sure this server is publicly accessible so that Copilot can reach it.


@@ -0,0 +1,145 @@
## OneUptime Copilot
OneUptime Copilot is a tool that automatically improves your codebase. It can fix the following kinds of issues:
- **Performance Issues**: Improve database queries, optimize code, reduce memory usage, decrease API response time, etc.
- **Security Issues**: Fix security vulnerabilities, prevent SQL injection, XSS, CSRF, etc.
- **Code Quality Issues**: Improve code readability, maintainability, and scalability. Improve comments, naming conventions, refactor code, etc.
- **Error Handling Issues**: Improve error handling, exception handling, logging, etc.
- **Testing Issues**: Improve test coverage, test quality, test performance, etc.
- **Documentation Issues**: Improve documentation quality, comments, README, etc.
### Architecture
Copilot can be installed as a CI/CD tool and run on every merge to the master / main branch. It can also be scheduled as a cron job in your CI/CD pipeline. We recommend running Copilot at least once a day.
Three services are involved when running Copilot:
- **OneUptime**: You need to deploy OneUptime or use OneUptime Cloud (https://oneuptime.com) to run Copilot. When you deploy OneUptime yourself, its URL should be publicly accessible.
- **Copilot**: Copilot is the main service that runs the Copilot engine. The engine is responsible for analyzing the codebase and fixing issues.
- **LLM Server** (Optional): Copilot sends your code to the LLM Server to analyze and fix issues. The source code is [open-source](https://github.com/OneUptime/oneuptime/tree/master/LLM) and the Docker image is available on [Docker Hub](https://hub.docker.com/r/oneuptime/llm). It can be self-deployed if you want to run Copilot on-premises, or you can use the hosted version.
### FAQ
##### Is my code sent to OneUptime?
No, your code is not sent to OneUptime. Copilot runs in your CI/CD pipeline and sends the code to the LLM Server for analysis. The LLM Server can be self-hosted.
##### Is my code sent to Self-Hosted LLM Server?
Yes. Your code is sent to the LLM Server, which is responsible for analyzing the code and fixing issues, but because you can self-host the LLM Server, your code never has to leave your infrastructure.
##### Is my code sent to any third-party?
No. We strictly do not send any telemetry data or code to any third-party.
##### Is my code sent to OpenAI?
No, if you host the LLM Server yourself.
Yes, if you choose to use OpenAI by setting `OPENAI_API_KEY` and `OPENAI_MODEL`. We recommend using OpenAI only if you are comfortable with OpenAI's privacy policy. We are not responsible for any data sent to OpenAI or for how your code is analyzed or used by OpenAI.
### Pre-requisites
Before you install Copilot, make sure you have the following:
- **OneUptime Account**: You need a OneUptime account to use Copilot. You can sign up for a free account at [OneUptime](https://oneuptime.com). You can either use OneUptime Cloud or deploy OneUptime on your own infrastructure.
- **GitHub Account**: You need a GitHub account to use Copilot. You can sign up for a free account at [GitHub](https://github.com). You can also use GitLab, Bitbucket, etc.
You also need one of the following:
- **LLM Server**: A running LLM Server. [Please check this guide to deploy the LLM Server](https://oneuptime.com/docs/copilot/deploy-llm-server).
or
- **OpenAI**: An OpenAI API key and model. Please check the environment variables below for more information.
### Installation
To install Copilot, follow these steps:
#### Environment Variables
You need to set the following environment variables to run Copilot:
**Required Environment Variables**:
- **ONEUPTIME_REPOSITORY_SECRET_KEY**: The secret key of the repository. You can get this key from OneUptime Dashboard -> Reliability Copilot -> View Repository. If you don't have a repository, you can create a new repository, then click on "View Repository" to get the secret key.
- **CODE_REPOSITORY_USERNAME**: OneUptime uses this username to commit and push changes to GitHub / GitLab / etc. This should be the username of an existing user on GitHub who has access to the repository.
- **CODE_REPOSITORY_PASSWORD**: OneUptime uses this password to commit and push changes to GitHub / GitLab / etc. This should be the password of that user. You can also use a Personal Access Token instead of a password; please make sure the token has write permissions to the repository.
**Optional Environment Variables**:
- **ONEUPTIME_URL**: The URL of OneUptime Cloud. If left empty, Copilot will default to `https://oneuptime.com`.
If you are using LLM Server, you need to set the following environment variables:
- **ONEUPTIME_LLM_SERVER_URL**: The URL of LLM Server. (For example: https://your-llm-server.com:8547)
If you are using OpenAI, you need to set the following environment variables:
- **OPENAI_API_KEY**: Your OpenAI API key. You can get this key from the OpenAI dashboard.
- **OPENAI_MODEL**: The OpenAI model to use (for example, `gpt-4o`).
**Important**: You need to provide either `ONEUPTIME_LLM_SERVER_URL`, or both `OPENAI_API_KEY` and `OPENAI_MODEL`, in order to use Copilot.
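For example, when running Copilot from a local shell, the two valid configurations look like this (the URL and key values are placeholders):
```bash
# Option 1: use a self-hosted LLM Server.
export ONEUPTIME_LLM_SERVER_URL='https://your-llm-server.com:8547'

# Option 2: use OpenAI instead (provide both variables).
export OPENAI_API_KEY='<YOUR_OPENAI_API_KEY>'
export OPENAI_MODEL='gpt-4o'
```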
#### GitHub Actions
You can use GitHub Actions to run Copilot on every merge to the master / main branch.
```yaml
name: "OneUptime Reliability Copilot"

on:
  push:
    # Change this to main if you are using the main branch.
    branches: [master]
  schedule:
    # Run every day at midnight UTC.
    - cron: '0 0 * * *'

jobs:
  analyze:
    name: Analyze Code
    runs-on: ubuntu-latest
    env:
      CI_PIPELINE_ID: ${{ github.run_number }}
    steps:
      # Run Reliability Copilot in a Docker container.
      # Required: CODE_REPOSITORY_PASSWORD, CODE_REPOSITORY_USERNAME and
      # ONEUPTIME_REPOSITORY_SECRET_KEY (use GitHub secrets for credentials).
      # Optional: ONEUPTIME_URL (defaults to OneUptime Cloud), and either
      # ONEUPTIME_LLM_SERVER_URL or OPENAI_API_KEY + OPENAI_MODEL.
      - name: Run Copilot
        run: |
          docker run --rm \
            -e CODE_REPOSITORY_PASSWORD='<YOUR_GITHUB_PASSWORD>' \
            -e CODE_REPOSITORY_USERNAME='<YOUR_GITHUB_USERNAME>' \
            -e ONEUPTIME_URL='https://oneuptime.com' \
            -e ONEUPTIME_REPOSITORY_SECRET_KEY='<ONEUPTIME_REPOSITORY_SECRET_KEY>' \
            -e ONEUPTIME_LLM_SERVER_URL='<YOUR_ONEUPTIME_LLM_SERVER>' \
            -e OPENAI_API_KEY='<YOUR_OPENAI_API_KEY>' \
            -e OPENAI_MODEL='<YOUR_OPENAI_MODEL>' \
            oneuptime/copilot:release
```
#### Docker
You can also run Copilot using Docker, in any CI/CD system of your choice.
```bash
# Required: CODE_REPOSITORY_PASSWORD, CODE_REPOSITORY_USERNAME and
# ONEUPTIME_REPOSITORY_SECRET_KEY (use CI secrets for credentials).
# Optional: ONEUPTIME_URL (defaults to OneUptime Cloud), and either
# ONEUPTIME_LLM_SERVER_URL or OPENAI_API_KEY + OPENAI_MODEL (e.g. `gpt-4o`).
docker run --rm \
  -e CODE_REPOSITORY_PASSWORD='<YOUR_GITHUB_PASSWORD>' \
  -e CODE_REPOSITORY_USERNAME='<YOUR_GITHUB_USERNAME>' \
  -e ONEUPTIME_URL='https://oneuptime.com' \
  -e ONEUPTIME_REPOSITORY_SECRET_KEY='<ONEUPTIME_REPOSITORY_SECRET_KEY>' \
  -e ONEUPTIME_LLM_SERVER_URL='<YOUR_ONEUPTIME_LLM_SERVER>' \
  -e OPENAI_API_KEY='<YOUR_OPENAI_API_KEY>' \
  -e OPENAI_MODEL='<YOUR_OPENAI_MODEL>' \
  oneuptime/copilot:release
```
### Support
If you have any questions or need help, please contact us at support@oneuptime.com.


@@ -72,6 +72,13 @@ const DocsNav: NavGroup[] = [
       { title: "Fluentd", url: "/docs/telemetry/fluentd" },
     ],
   },
+  {
+    title: "Copilot",
+    links: [
+      { title: "Installation", url: "/docs/copilot/introduction" },
+      { title: "Deploy LLM Server", url: "/docs/copilot/deploy-llm-server" },
+    ],
+  },
 ];
 // Export the array of navigation groups


@@ -3,4 +3,4 @@ ONEUPTIME_REPOSITORY_SECRET_KEY=your-repository-secret-key
 CODE_REPOSITORY_PASSWORD=
 CODE_REPOSITORY_USERNAME=
 # Optional. If this is left blank then this url will be ONEUPTIME_URL/llama
-ONEUPTIME_LLAMA_SERVER_URL=
+ONEUPTIME_LLM_SERVER_URL=


@@ -40,10 +40,10 @@ export const GetCodeRepositoryUsername: GetStringOrNullFunction = ():
   return username;
 };
-export const GetLlamaServerUrl: GetURLFunction = () => {
+export const GetLlmServerUrl: GetURLFunction = () => {
   return URL.fromString(
-    process.env["ONEUPTIME_LLAMA_SERVER_URL"] ||
-      GetOneUptimeURL().addRoute("/llama").toString(),
+    process.env["ONEUPTIME_LLM_SERVER_URL"] ||
+      GetOneUptimeURL().addRoute("/llm").toString(),
   );
 };


@@ -85,5 +85,4 @@ COPY ./Copilot /usr/src/app
 RUN npm run compile
 #Run the app
 CMD [ "npm", "start" ]
-{{ end }}
 {{ end }}


@@ -2,13 +2,5 @@
 Copilot is a tool that helps you improve your codebase automatically.
 ## Run Copilot with Docker
+Please refer to the [official documentation](/App/FeatureSet/Docs/Content/copilot) for more information.
-```bash
-docker run -v $(pwd):/repository -w /repository oneuptime/copilot
-```
-### Volumes
-- `/repository` - The directory where your codebase is located.


@@ -255,6 +255,8 @@ If you have any feedback or suggestions, please let us know. We would love to h
       processResult.result.files[filePath]!.gitCommitHash;
     logger.info(`Writing file: ${filePath} ${fileCommitHash}`);
+    logger.info(`File content: `);
+    logger.info(`${processResult.result.files[filePath]!.fileContent}`);
     const code: string = processResult.result.files[filePath]!.fileContent;


@@ -1,5 +1,5 @@
 import URL from "Common/Types/API/URL";
-import { GetLlamaServerUrl, GetRepositorySecretKey } from "../../Config";
+import { GetLlmServerUrl, GetRepositorySecretKey } from "../../Config";
 import LlmBase, { CopilotPromptResult } from "./LLMBase";
 import API from "Common/Utils/API";
 import HTTPErrorResponse from "Common/Types/API/HTTPErrorResponse";
@@ -27,7 +27,7 @@ export default class Llama extends LlmBase {
   public static override async getResponse(
     data: CopilotActionPrompt,
   ): Promise<CopilotPromptResult> {
-    const serverUrl: URL = GetLlamaServerUrl();
+    const serverUrl: URL = GetLlmServerUrl();
     const response: HTTPErrorResponse | HTTPResponse<JSONObject> =
       await API.post(


@@ -1,6 +1,6 @@
 import {
   GetCodeRepositoryPassword,
-  GetLlamaServerUrl,
+  GetLlmServerUrl,
   GetLlmType,
   GetRepositorySecretKey,
 } from "../Config";
@@ -16,7 +16,7 @@ import { JSONObject } from "Common/Types/JSON";
 export default class InitUtil {
   public static async init(): Promise<CodeRepositoryResult> {
-    const llamaServerUrl: URL = GetLlamaServerUrl();
+    const llamaServerUrl: URL = GetLlmServerUrl();
     if (GetLlmType() === LlmType.Llama) {
       // check status of llama server


@@ -249,4 +249,5 @@ COPILOT_ONEUPTIME_URL=http://localhost
 COPILOT_ONEUPTIME_REPOSITORY_SECRET_KEY=
 COPILOT_CODE_REPOSITORY_PASSWORD=
 COPILOT_CODE_REPOSITORY_USERNAME=
-COPILOT_ONEUPTIME_LLAMA_SERVER_URL=
+COPILOT_ONEUPTIME_LLM_SERVER_URL=
+DISABLE_COPILOT=true # Set this to false if you want to enable copilot.


@@ -374,7 +374,7 @@ services:
       ONEUPTIME_REPOSITORY_SECRET_KEY: ${COPILOT_ONEUPTIME_REPOSITORY_SECRET_KEY}
       CODE_REPOSITORY_PASSWORD: ${COPILOT_CODE_REPOSITORY_PASSWORD}
       CODE_REPOSITORY_USERNAME: ${COPILOT_CODE_REPOSITORY_USERNAME}
-      ONEUPTIME_LLAMA_SERVER_URL: ${COPILOT_ONEUPTIME_LLAMA_SERVER_URL}
+      ONEUPTIME_LLM_SERVER_URL: ${COPILOT_ONEUPTIME_LLM_SERVER_URL}
       DISABLE_COPILOT: ${DISABLE_COPILOT}
     logging:
       driver: "local"