Commit eb2c2d0

docs(karpor): add api proxy parameters in helm installation document (#599)

1 parent 3b03502 · commit eb2c2d0

2 files changed: +28 -2 lines changed

docs/karpor/1-getting-started/2-installation.md

Lines changed: 14 additions & 1 deletion

````diff
@@ -118,6 +118,14 @@ helm install karpor-release kusionstack/karpor \
 --set server.ai.model=deepseek-chat \
 --set server.ai.topP=0.5 \
 --set server.ai.temperature=0.2
+
+# Example using AI Proxy
+helm install karpor kusionstack/karpor \
+--set server.ai.authToken={YOUR_AI_TOKEN} \
+--set server.ai.proxy.enabled=true \
+--set server.ai.proxy.httpProxy={YOUR_HTTP_PROXY} \
+--set server.ai.proxy.httpsProxy={YOUR_HTTPS_PROXY} \
+--set server.ai.proxy.noProxy={YOUR_NO_PROXY}
 ```
 
 ## Chart Parameters
@@ -144,11 +152,16 @@ The Karpor Server Component is main backend server. It itself is an `apiserver`,
 
 | Key | Type | Default | Description |
 |-----|------|---------|-------------|
-| server.ai | object | `{"authToken":"","backend":"openai","baseUrl":"","model":"gpt-3.5-turbo","temperature":1,"topP":1}` | AI configuration section. The AI analysis feature requires that [authToken, baseUrl] be assigned values. |
+| server.ai | object | `{"authToken":"","backend":"openai","baseUrl":"","model":"gpt-3.5-turbo","proxy":{"enabled":false,"httpProxy":"","httpsProxy":"","noProxy":""},"temperature":1,"topP":1}` | AI configuration section. The AI analysis feature requires that [authToken, baseUrl] be assigned values. |
 | server.ai.authToken | string | `""` | Authentication token for accessing the AI service. |
 | server.ai.backend | string | `"openai"` | Backend service or platform that the AI model is hosted on. Available options: <br/>- `"openai"`: OpenAI API (default)<br/>- `"azureopenai"`: Azure OpenAI Service<br/>- `"huggingface"`: Hugging Face API<br/> If the backend you are using is compatible with OpenAI, then there is no need to make any changes here. |
 | server.ai.baseUrl | string | `""` | Base URL of the AI service. e.g., "https://api.openai.com/v1". |
 | server.ai.model | string | `"gpt-3.5-turbo"` | Name or identifier of the AI model to be used. e.g., "gpt-3.5-turbo". |
+| server.ai.proxy | object | `{"enabled":false,"httpProxy":"","httpsProxy":"","noProxy":""}` | Proxy configuration for AI service connections. |
+| server.ai.proxy.enabled | bool | `false` | Enable proxy settings for AI service connections. When false, proxy settings will be ignored. |
+| server.ai.proxy.httpProxy | string | `""` | HTTP proxy URL for AI service connections (e.g., "http://proxy.example.com:8080"). |
+| server.ai.proxy.httpsProxy | string | `""` | HTTPS proxy URL for AI service connections (e.g., "https://proxy.example.com:8080"). |
+| server.ai.proxy.noProxy | string | `""` | No proxy configuration for AI service connections (e.g., "localhost,127.0.0.1,example.com"). |
 | server.ai.temperature | float | `1` | Temperature parameter for the AI model. This controls the randomness of the output, where a higher value (e.g., 1.0) makes the output more random, and a lower value (e.g., 0.0) makes it more deterministic. |
 | server.ai.topP | float | `1` | Top-p (nucleus sampling) parameter for the AI model. This controls Controls the probability mass to consider for sampling, where a higher value leads to greater diversity in the generated content (typically ranging from 0 to 1) |
 | server.enableRbac | bool | `false` | Enable RBAC authorization if set to true. |
````
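The `--set` flags introduced in this commit can equivalently be supplied through a values file. The following is a minimal sketch assuming only the chart keys listed in the parameter table above; the file name, token, and proxy URLs are placeholders, not values from the commit:

```yaml
# values-proxy.yaml — hypothetical values file mirroring the new --set flags.
# Every value below is a placeholder; substitute your own.
server:
  ai:
    authToken: "{YOUR_AI_TOKEN}"
    proxy:
      enabled: true                               # when false, the settings below are ignored
      httpProxy: "http://proxy.example.com:8080"  # proxy for plain-HTTP requests
      httpsProxy: "https://proxy.example.com:8080" # proxy for HTTPS requests
      noProxy: "localhost,127.0.0.1,example.com"  # comma-separated hosts that bypass the proxy
```

It would then be installed with `helm install karpor kusionstack/karpor -f values-proxy.yaml`; keeping proxy settings in a file avoids repeating long `--set` chains on every upgrade.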

i18n/zh/docusaurus-plugin-content-docs-karpor/current/1-getting-started/2-installation.md

Lines changed: 14 additions & 1 deletion

````diff
@@ -118,6 +118,14 @@ helm install karpor-release kusionstack/karpor \
 --set server.ai.model=deepseek-chat \
 --set server.ai.topP=0.5 \
 --set server.ai.temperature=0.2
+
+# 使用 AI Proxy 的样例
+helm install karpor kusionstack/karpor \
+--set server.ai.authToken={YOUR_AI_TOKEN} \
+--set server.ai.proxy.enabled=true \
+--set server.ai.proxy.httpProxy={YOUR_HTTP_PROXY} \
+--set server.ai.proxy.httpsProxy={YOUR_HTTPS_PROXY} \
+--set server.ai.proxy.noProxy={YOUR_NO_PROXY}
 ```
 
 ## Chart 参数
@@ -144,11 +152,16 @@ Karpor 服务器组件是主要的后端服务器。它本身是一个 `apiserve
 
 || 类型 | 默认值 | 描述 |
 |-----|------|---------|-------------|
-| server.ai | object | `{"authToken":"","backend":"openai","baseUrl":"","model":"gpt-3.5-turbo","temperature":1,"topP":1}` | AI 配置部分。AI 分析功能需要为 [authToken, baseUrl] 赋值。 |
+| server.ai | object | `{"authToken":"","backend":"openai","baseUrl":"","model":"gpt-3.5-turbo","proxy":{"enabled":false,"httpProxy":"","httpsProxy":"","noProxy":""},"temperature":1,"topP":1}` | AI 配置部分。AI 分析功能需要为 [authToken, baseUrl] 赋值。 |
 | server.ai.authToken | string | `""` | 访问 AI 服务的认证令牌。 |
 | server.ai.backend | string | `"openai"` | 托管 AI 模型的后端服务或平台。可用选项:<br/>- `"openai"`: OpenAI API(默认)<br/>- `"azureopenai"`: Azure OpenAI 服务<br/>- `"huggingface"`: Hugging Face API<br/>如果您使用的后端与 OpenAI 兼容,则无需在此处进行任何更改。 |
 | server.ai.baseUrl | string | `""` | AI 服务的基础 URL。例如:"https://api.openai.com/v1"。 |
 | server.ai.model | string | `"gpt-3.5-turbo"` | 要使用的 AI 模型的名称或标识符。例如:"gpt-3.5-turbo"。 |
+| server.ai.proxy | object | `{"enabled":false,"httpProxy":"","httpsProxy":"","noProxy":""}` | AI 服务连接的代理配置。 |
+| server.ai.proxy.enabled | bool | `false` | 启用 AI 服务连接的代理设置。如果为 false,则将忽略代理设置。 |
+| server.ai.proxy.httpProxy | string | `""` | AI 服务连接的 HTTP 代理 URL(例如“http://proxy.example.com:8080”)。 |
+| server.ai.proxy.httpsProxy | string | `""` | AI 服务连接的 HTTPS 代理 URL(例如“https://proxy.example.com:8080”)。 |
+| server.ai.proxy.noProxy | string | `""` | 不需要通过代理服务器进行访问的域名(例如“localhost,127.0.0.1,example.com”)。|
 | server.ai.temperature | float | `1` | AI 模型的温度参数。控制输出的随机性,较高的值(例如 1.0)使输出更随机,较低的值(例如 0.0)使输出更确定性。 |
 | server.ai.topP | float | `1` | AI 模型的 Top-p(核采样)参数。控制采样的概率质量,较高的值导致生成内容的多样性更大(通常范围为 0 到 1)。 |
 | server.enableRbac | bool | `false` | 如果设置为 true,则启用 RBAC 授权。 |
````
