Commit 3b03502

docs(karpor): refine helm installation document (#598)
* docs: update installation guide structure and AI backend details

  - Change section headers from '###' to '##' for better hierarchy
  - Add detailed descriptions for AI backend options (OpenAI, Azure OpenAI, Hugging Face)
  - Ensure consistency in parameter descriptions across both English and Chinese versions

  These changes improve the readability and clarity of the installation guide, making it easier for users to understand the configuration options available for the AI backend.

* docs: translate installation guide to Chinese

  - Translate all sections of the installation guide from English to Chinese
  - Update table headers and descriptions to match the Chinese language
  - Ensure consistency in terminology and formatting throughout the document

* docs: update AI feature installation instructions

  - Rephrase instructions for enabling AI features for clarity
  - Add examples for different AI backends (OpenAI, Azure OpenAI, Hugging Face)
  - Restructure configuration examples to improve readability

  These changes aim to make the installation process for AI features more intuitive and provide better guidance for users configuring different AI backends.

* docs: update installation guide for Deepseek AI backend

  - Change base URL from OpenAI to Deepseek
  - Update model name from gpt-3.5-turbo to deepseek-chat

  These changes reflect the switch from OpenAI to Deepseek as the AI backend service, ensuring the documentation is accurate and up-to-date.
1 parent 188d854 commit 3b03502

File tree (2 files changed: +107 / -94 lines):
- docs/karpor/1-getting-started/2-installation.md
- i18n/zh/docusaurus-plugin-content-docs-karpor/current/1-getting-started/2-installation.md


docs/karpor/1-getting-started/2-installation.md

Lines changed: 30 additions & 29 deletions
@@ -91,63 +91,62 @@ helm install karpor-release kusionstack/karpor --set registryProxy=docker.m.daoc
 
 ### Enable AI features
 
-If you are trying to install Karpor with AI features, including natural language search and AI analyze, `ai-auth-token` and `ai-base-url` should be configured, e.g.:
+If you want to install Karpor with AI features, including natural language search and AI analysis, you should configure parameters such as `ai-auth-token`, `ai-base-url`, etc., for example:
 
 ```shell
-# At a minimum, server.ai.authToken and server.ai.baseUrl must be configured.
+# Minimal configuration, using OpenAI as the default AI backend
 helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1
+  --set server.ai.authToken={YOUR_AI_TOKEN}
 
-# server.ai.backend has default values `openai`, which can be overridden when necessary.
-# If the backend you are using is compatible with OpenAI, then there is no need to make
-# any changes here.
+# Example using Azure OpenAI
 helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1 \
-  --set server.ai.backend=huggingface
+  --set server.ai.authToken={YOUR_AI_TOKEN} \
+  --set server.ai.baseUrl=https://{YOUR_RESOURCE_NAME}.openai.azure.com \
+  --set server.ai.backend=azureopenai
 
-# server.ai.model has default values `gpt-3.5-turbo`, which can be overridden when necessary.
+# Example using Hugging Face
 helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1 \
-  --set server.ai.model=gpt-4o
+  --set server.ai.authToken={YOUR_AI_TOKEN} \
+  --set server.ai.model={YOUR_HUGGINGFACE_MODEL} \
+  --set server.ai.backend=huggingface
 
-# server.ai.topP and server.ai.temperature can also be manually modified.
+# Custom configuration
 helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1 \
-  --set server.ai.topP=0.5 \
-  --set server.ai.temperature=0.2
+  --set server.ai.authToken={YOUR_AI_TOKEN} \
+  --set server.ai.baseUrl=https://api.deepseek.com \
+  --set server.ai.backend=openai \
+  --set server.ai.model=deepseek-chat \
+  --set server.ai.topP=0.5 \
+  --set server.ai.temperature=0.2
 ```
 
-### Chart Parameters
+## Chart Parameters
 
 The following table lists the configurable parameters of the chart and their default values.
 
-#### General Parameters
+### General Parameters
 
 | Key | Type | Default | Description |
 |-----|------|---------|-------------|
 | namespace | string | `"karpor"` | Which namespace to be deployed. |
 | namespaceEnabled | bool | `true` | Whether to generate namespace. |
 | registryProxy | string | `""` | Image registry proxy will be the prefix as all component image. |
 
-#### Global Parameters
+### Global Parameters
 
 | Key | Type | Default | Description |
 |-----|------|---------|-------------|
 | global.image.imagePullPolicy | string | `"IfNotPresent"` | Image pull policy to be applied to all Karpor components. |
 
-#### Karpor Server
+### Karpor Server
 
 The Karpor Server Component is main backend server. It itself is an `apiserver`, which also provides `/rest-api` to serve Dashboard.
 
 | Key | Type | Default | Description |
 |-----|------|---------|-------------|
 | server.ai | object | `{"authToken":"","backend":"openai","baseUrl":"","model":"gpt-3.5-turbo","temperature":1,"topP":1}` | AI configuration section. The AI analysis feature requires that [authToken, baseUrl] be assigned values. |
-| server.ai.authToken | string | `""` | Authentication token for accessing the AI service. |
-| server.ai.backend | string | `"openai"` | Backend service or platform that the AI model is hosted on. e.g., "openai". If the backend you are using is compatible with OpenAI, then there is no need to make any changes here. |
+| server.ai.authToken | string | `""` | Authentication token for accessing the AI service. |
+| server.ai.backend | string | `"openai"` | Backend service or platform that the AI model is hosted on. Available options: <br/>- `"openai"`: OpenAI API (default)<br/>- `"azureopenai"`: Azure OpenAI Service<br/>- `"huggingface"`: Hugging Face API<br/> If the backend you are using is compatible with OpenAI, then there is no need to make any changes here. |
 | server.ai.baseUrl | string | `""` | Base URL of the AI service. e.g., "https://api.openai.com/v1". |
 | server.ai.model | string | `"gpt-3.5-turbo"` | Name or identifier of the AI model to be used. e.g., "gpt-3.5-turbo". |
 | server.ai.temperature | float | `1` | Temperature parameter for the AI model. This controls the randomness of the output, where a higher value (e.g., 1.0) makes the output more random, and a lower value (e.g., 0.0) makes it more deterministic. |
@@ -161,7 +160,7 @@ The Karpor Server Component is main backend server. It itself is an `apiserver`,
 | server.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Resource limits and requests for the karpor server pods. |
 | server.serviceType | string | `"ClusterIP"` | Service type for the karpor server. The available type values list as ["ClusterIP"、"NodePort"、"LoadBalancer"]. |
 
-#### Karpor Syncer
+### Karpor Syncer
 
 The Karpor Syncer Component is independent server to synchronize cluster resources in real-time.
 
@@ -174,7 +173,7 @@ The Karpor Syncer Component is independent server to synchronize cluster resourc
 | syncer.replicas | int | `1` | The number of karpor syncer pods to run. |
 | syncer.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Resource limits and requests for the karpor syncer pods. |
 
-#### ElasticSearch
+### ElasticSearch
 
 The ElasticSearch Component to store the synchronized resources and user data.
 
@@ -187,7 +186,7 @@ The ElasticSearch Component to store the synchronized resources and user data.
 | elasticsearch.replicas | int | `1` | The number of ElasticSearch pods to run. |
 | elasticsearch.resources | object | `{"limits":{"cpu":"2","ephemeral-storage":"10Gi","memory":"4Gi"},"requests":{"cpu":"2","ephemeral-storage":"10Gi","memory":"4Gi"}}` | Resource limits and requests for the karpor elasticsearch pods. |
 
-#### ETCD
+### ETCD
 
 The ETCD Component is the storage of Karpor Server as `apiserver`.
 
@@ -202,11 +201,13 @@ The ETCD Component is the storage of Karpor Server as `apiserver`.
 | etcd.replicas | int | `1` | The number of etcd pods to run. |
 | etcd.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Resource limits and requests for the karpor etcd pods. |
 
-#### Job
+### Job
 
 This one-time job is used to generate root certificates and some preliminary work.
 
 | Key | Type | Default | Description |
 |-----|------|---------|-------------|
 | job.image.repo | string | `"kusionstack/karpor"` | Repository for the Job image. |
 | job.image.tag | string | `""` | Tag for Karpor image. Defaults to the chart's appVersion if not specified. |
+
+
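
The diff above configures the AI backend entirely through `--set` flags. As a minimal sketch of an equivalent setup, the snippet below puts the same keys from the Karpor Server parameter table into a values file; the file name `ai-values.yaml` and the token placeholder are illustrative, not part of the chart.

```shell
# A minimal sketch: the same AI settings as the --set examples above,
# written to a values file. The file name ai-values.yaml is arbitrary,
# and {YOUR_AI_TOKEN} is a placeholder for your own credential.
cat > ai-values.yaml <<'EOF'
server:
  ai:
    authToken: "{YOUR_AI_TOKEN}"
    baseUrl: "https://api.deepseek.com"
    backend: "openai"
    model: "deepseek-chat"
    topP: 0.5
    temperature: 0.2
EOF

# Install the release from the values file instead of repeating --set flags.
helm install karpor-release kusionstack/karpor -f ai-values.yaml
```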

i18n/zh/docusaurus-plugin-content-docs-karpor/current/1-getting-started/2-installation.md

Lines changed: 77 additions & 65 deletions
@@ -91,109 +91,121 @@ helm install karpor-release kusionstack/karpor --set registryProxy=docker.m.daoc
 
 ### 启用 AI 功能
 
-如果您要安装带有AI功能的Karpor,包括自然语言搜索和AI分析,则应配置 `ai-auth-token` 和 `ai-base-url`,例如:
+如果您要安装带有 AI 功能的 Karpor,包括自然语言搜索和 AI 分析,则应配置 `ai-auth-token`、`ai-base-url` 等参数,例如:
 
 ```shell
-# 至少需要配置 server.ai.authToken 和 server.ai.baseUrl。
+# 最少配置,默认使用 OpenAI 作为 AI Backend
 helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1
-# server.ai.backend 的默认值是 `openai`,可以根据需要进行覆盖。如果你使用的后端与 OpenAI 兼容,则无需在此处进行任何更改。
+  --set server.ai.authToken={YOUR_AI_TOKEN}
+
+# 使用 Azure OpenAI 的样例
 helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1 \
-  --set server.ai.backend=huggingface
-# server.ai.model 的默认值是 `gpt-3.5-turbo`,可以根据需要进行覆盖。
+  --set server.ai.authToken={YOUR_AI_TOKEN} \
+  --set server.ai.baseUrl=https://{YOUR_RESOURCE_NAME}.openai.azure.com \
+  --set server.ai.backend=azureopenai
+
+# 使用 Hugging Face 的样例
 helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1 \
-  --set server.ai.model=gpt-4o
-# server.ai.topP 和 server.ai.temperature 也可以手动修改。
+  --set server.ai.authToken={YOUR_AI_TOKEN} \
+  --set server.ai.model={YOUR_HUGGINGFACE_MODEL} \
+  --set server.ai.backend=huggingface
+
+# 自定义配置
 helm install karpor-release kusionstack/karpor \
-  --set server.ai.authToken=YOUR_AI_TOKEN \
-  --set server.ai.baseUrl=https://api.openai.com/v1 \
-  --set server.ai.topP=0.5 \
-  --set server.ai.temperature=0.2
+  --set server.ai.authToken={YOUR_AI_TOKEN} \
+  --set server.ai.baseUrl=https://api.deepseek.com \
+  --set server.ai.backend=openai \
+  --set server.ai.model=deepseek-chat \
+  --set server.ai.topP=0.5 \
+  --set server.ai.temperature=0.2
 ```
 
-### Chart 参数
+## Chart 参数
 
 以下表格列出了 Chart 的所有可配置参数及其默认值。
 
-#### 通用参数
+### 通用参数
 
 | 键 | 类型 | 默认值 | 描述 |
 |-----|------|---------|-------------|
-| namespace | string | `"karpor"` | 部署的目标命名空间 |
-| namespaceEnabled | bool | `true` | 是否生成命名空间 |
-| registryProxy | string | `""` | 镜像代理地址,配置后将作为所有组件镜像的前缀。 比如,`golang:latest` 将替换为 `<registryProxy>/golang:latest` |
+| namespace | string | `"karpor"` | 部署的目标命名空间 |
+| namespaceEnabled | bool | `true` | 是否生成命名空间 |
+| registryProxy | string | `""` | 镜像仓库代理,将作为所有组件镜像的前缀。 |
 
-#### 全局参数
+### 全局参数
 
 | 键 | 类型 | 默认值 | 描述 |
 |-----|------|---------|-------------|
-| global.image.imagePullPolicy | string | `"IfNotPresent"` | 应用于所有 Karpor 组件的镜像拉取策略 |
+| global.image.imagePullPolicy | string | `"IfNotPresent"` | 应用于所有 Karpor 组件的镜像拉取策略 |
 
-#### Karpor Server
+### Karpor 服务端
 
-Karpor Server 组件是主要的后端服务。它本身就是一个 `apiserver`,也提供 `/rest-api` 来服务 Web UI
+Karpor 服务器组件是主要的后端服务器。它本身是一个 `apiserver`,同时也提供 `/rest-api` 来服务仪表板。
 
 | 键 | 类型 | 默认值 | 描述 |
 |-----|------|---------|-------------|
-| server.image.repo | string | `"kusionstack/karpor"` | Karpor Server 镜像的仓库 |
-| server.image.tag | string | `""` | Karpor Server 镜像的标签。如果未指定,则默认为 Chart 的 appVersion |
-| server.name | string | `"karpor-server"` | Karpor Server 的组件名称 |
-| server.port | int | `7443` | Karpor Server 的端口 |
-| server.replicas | int | `1` | 要运行的 Karpor Server pod 的数量 |
-| server.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Karpor Server pod 的资源规格 |
-| server.serviceType | string | `"ClusterIP"` | Karpor Server 的服务类型,可用的值为 ["ClusterIP"、"NodePort"、"LoadBalancer"] |
-
-#### Karpor Syncer
-
-Karpor Syncer 组件是独立的服务,用于实时同步集群资源。
+| server.ai | object | `{"authToken":"","backend":"openai","baseUrl":"","model":"gpt-3.5-turbo","temperature":1,"topP":1}` | AI 配置部分。AI 分析功能需要为 [authToken, baseUrl] 赋值。 |
+| server.ai.authToken | string | `""` | 访问 AI 服务的认证令牌。 |
+| server.ai.backend | string | `"openai"` | 托管 AI 模型的后端服务或平台。可用选项:<br/>- `"openai"`: OpenAI API(默认)<br/>- `"azureopenai"`: Azure OpenAI 服务<br/>- `"huggingface"`: Hugging Face API<br/>如果您使用的后端与 OpenAI 兼容,则无需在此处进行任何更改。 |
+| server.ai.baseUrl | string | `""` | AI 服务的基础 URL。例如:"https://api.openai.com/v1"。 |
+| server.ai.model | string | `"gpt-3.5-turbo"` | 要使用的 AI 模型的名称或标识符。例如:"gpt-3.5-turbo"。 |
+| server.ai.temperature | float | `1` | AI 模型的温度参数。控制输出的随机性,较高的值(例如 1.0)使输出更随机,较低的值(例如 0.0)使输出更确定性。 |
+| server.ai.topP | float | `1` | AI 模型的 Top-p(核采样)参数。控制采样的概率质量,较高的值导致生成内容的多样性更大(通常范围为 0 到 1)。 |
+| server.enableRbac | bool | `false` | 如果设置为 true,则启用 RBAC 授权。 |
+| server.image.repo | string | `"kusionstack/karpor"` | Karpor 服务器镜像的仓库。 |
+| server.image.tag | string | `""` | Karpor 服务器镜像的标签。如果未指定,则默认为 Chart 的 appVersion。 |
+| server.name | string | `"karpor-server"` | Karpor 服务器的组件名称。 |
+| server.port | int | `7443` | Karpor 服务器的端口。 |
+| server.replicas | int | `1` | 要运行的 Karpor 服务器 Pod 数量。 |
+| server.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Karpor 服务器 Pod 的资源限制和请求。 |
+| server.serviceType | string | `"ClusterIP"` | Karpor 服务器的服务类型。可用类型值为 ["ClusterIP"、"NodePort"、"LoadBalancer"]。 |
+
+### Karpor 同步器
+
+Karpor 同步器组件是一个独立的服务器,用于实时同步集群资源。
 
 | 键 | 类型 | 默认值 | 描述 |
 |-----|------|---------|-------------|
-| syncer.image.repo | string | `"kusionstack/karpor"` | Karpor Syncer 镜像的仓库 |
-| syncer.image.tag | string | `""` | Karpor Syncer 镜像的标签。如果未指定,则默认为 Chart 的 appVersion |
-| syncer.name | string | `"karpor-syncer"` | karpor Syncer 的组件名称 |
-| syncer.port | int | `7443` | karpor Syncer 的端口 |
-| syncer.replicas | int | `1` | 要运行的 karpor Syncer pod 的数量 |
-| syncer.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | karpor Syncer pod 的资源规格 |
+| syncer.image.repo | string | `"kusionstack/karpor"` | Karpor 同步器镜像的仓库。 |
+| syncer.image.tag | string | `""` | Karpor 同步器镜像的标签。如果未指定,则默认为 Chart 的 appVersion |
+| syncer.name | string | `"karpor-syncer"` | Karpor 同步器的组件名称。 |
+| syncer.port | int | `7443` | Karpor 同步器的端口。 |
+| syncer.replicas | int | `1` | 要运行的 Karpor 同步器 Pod 数量。 |
+| syncer.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Karpor 同步器 Pod 的资源限制和请求。 |
 
-#### ElasticSearch
+### ElasticSearch
 
-ElasticSearch 组件用于存储同步的资源和用户数据
+ElasticSearch 组件用于存储同步的资源数据和用户数据。
 
 | 键 | 类型 | 默认值 | 描述 |
 |-----|------|---------|-------------|
-| elasticsearch.image.repo | string | `"docker.elastic.co/elasticsearch/elasticsearch"` | ElasticSearch 镜像的仓库 |
-| elasticsearch.image.tag | string | `"8.6.2"` | ElasticSearch 镜像的特定标签 |
-| elasticsearch.name | string | `"elasticsearch"` | ElasticSearch 的组件名称 |
-| elasticsearch.port | int | `9200` | ElasticSearch 的端口 |
-| elasticsearch.replicas | int | `1` | 要运行的 ElasticSearch pod 的数量 |
-| elasticsearch.resources | object | `{"limits":{"cpu":"2","ephemeral-storage":"10Gi","memory":"4Gi"},"requests":{"cpu":"2","ephemeral-storage":"10Gi","memory":"4Gi"}}` | karpor elasticsearch pod 的资源规格 |
+| elasticsearch.image.repo | string | `"docker.elastic.co/elasticsearch/elasticsearch"` | ElasticSearch 镜像的仓库 |
+| elasticsearch.image.tag | string | `"8.6.2"` | ElasticSearch 镜像的特定标签 |
+| elasticsearch.name | string | `"elasticsearch"` | ElasticSearch 的组件名称 |
+| elasticsearch.port | int | `9200` | ElasticSearch 的端口 |
+| elasticsearch.replicas | int | `1` | 要运行的 ElasticSearch Pod 数量。 |
+| elasticsearch.resources | object | `{"limits":{"cpu":"2","ephemeral-storage":"10Gi","memory":"4Gi"},"requests":{"cpu":"2","ephemeral-storage":"10Gi","memory":"4Gi"}}` | Karpor ElasticSearch Pod 的资源限制和请求。 |
 
-#### ETCD
+### ETCD
 
-ETCD 组件是 Karpor Server 作为 `apiserver` 背后的存储
+ETCD 组件是 Karpor 服务器作为 `apiserver` 的存储
 
 | 键 | 类型 | 默认值 | 描述 |
 |-----|------|---------|-------------|
-| etcd.image.repo | string | `"quay.io/coreos/etcd"` | ETCD 镜像的仓库 |
-| etcd.image.tag | string | `"v3.5.11"` | ETCD 镜像的标签 |
-| etcd.name | string | `"etcd"` | ETCD 的组件名称 |
-| etcd.persistence.accessModes[0] | string | `"ReadWriteOnce"` | |
-| etcd.persistence.size | string | `"10Gi"` | |
-| etcd.port | int | `2379` | ETCD 的端口 |
-| etcd.replicas | int | `1` | 要运行的 etcd pod 的数量 |
-| etcd.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | karpor etcd pod 的资源规格 |
+| etcd.image.repo | string | `"quay.io/coreos/etcd"` | ETCD 镜像的仓库 |
+| etcd.image.tag | string | `"v3.5.11"` | ETCD 镜像的特定标签。 |
+| etcd.name | string | `"etcd"` | ETCD 的组件名称 |
+| etcd.persistence.accessModes | list | `["ReadWriteOnce"]` | 卷访问模式,ReadWriteOnce 表示单节点读写访问。 |
+| etcd.persistence.size | string | `"10Gi"` | ETCD 持久卷的大小。 |
+| etcd.port | int | `2379` | ETCD 的端口 |
+| etcd.replicas | int | `1` | 要运行的 ETCD Pod 数量。 |
+| etcd.resources | object | `{"limits":{"cpu":"500m","ephemeral-storage":"10Gi","memory":"1Gi"},"requests":{"cpu":"250m","ephemeral-storage":"2Gi","memory":"256Mi"}}` | Karpor ETCD Pod 的资源限制和请求。 |
 
-#### Job
+### 任务
 
-这是一个一次性 Kubernetes Job,用于生成根证书和一些前置工作。Karpor Server 和 Karpor Syncer 都需要依赖它完成才能正常启动
+此一次性任务用于生成根证书和一些准备工作
 
 | 键 | 类型 | 默认值 | 描述 |
 |-----|------|---------|-------------|
-| job.image.repo | string | `"kusionstack/karpor"` | Job 镜像的仓库 |
-| job.image.tag | string | `""` | Karpor 镜像的标签。如果未指定,则默认为 Chart 的 appVersion |
+| job.image.repo | string | `"kusionstack/karpor"` | 任务镜像的仓库。 |
+| job.image.tag | string | `""` | Karpor 镜像的标签。如果未指定,则默认为 Chart 的 appVersion |
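
Both language versions of the guide configure the chart at install time. For an already installed release, the sketch below shows how the same parameters could be inspected and adjusted afterwards with standard Helm commands, assuming the `karpor-release` name and `kusionstack` repository alias used throughout the guide.

```shell
# List every configurable key of the chart with its default value,
# matching the parameter tables in the installation document.
helm show values kusionstack/karpor

# Show the values currently applied to the installed release.
helm get values karpor-release

# Adjust a single AI parameter on the running release; --reuse-values keeps
# the previously supplied values and merges the new --set on top of them.
helm upgrade karpor-release kusionstack/karpor \
  --reuse-values \
  --set server.ai.temperature=0.5
```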
