Commit 59e40ba

Authored by clux

Update helm chart to be more easily configurable (#56)

* Update helm chart to be more easily configurable
* do not default generate servicemonitor
* doc update
* can't bump here unless we tag...

Signed-off-by: clux <sszynrae@gmail.com>
1 parent 569af1b commit 59e40ba

File tree

6 files changed (+108, -41 lines)

README.md

Lines changed: 30 additions & 25 deletions
````diff
@@ -7,33 +7,44 @@ A rust kubernetes reference controller for a [`Document` resource](https://githu
 
 The `Controller` object reconciles `Document` instances when changes to it are detected, writes to its `.status` object, creates associated events, and uses finalizers for guaranteed delete handling.
 
-## Requirements
-- A Kubernetes cluster / k3d instance
-- The [CRD](yaml/crd.yaml)
-- Opentelemetry collector (**optional**)
+## Installation
 
-### Cluster
-As an example; get `k3d` then:
+### CRD
+Apply the CRD from [cached file](yaml/crd.yaml), or pipe it from `crdgen` to pickup schema changes:
 
 ```sh
-k3d cluster create --registry-create --servers 1 --agents 1 main
-k3d kubeconfig get --all > ~/.kube/k3d
-export KUBECONFIG="$HOME/.kube/k3d"
+cargo run --bin crdgen | kubectl apply -f -
 ```
 
-A default `k3d` setup is fastest for local dev due to its local registry.
+### Controller
 
-### CRD
-Apply the CRD from [cached file](yaml/crd.yaml), or pipe it from `crdgen` (best if changing it):
+Install the controller via `helm` by setting your preferred settings. For defaults:
 
 ```sh
-cargo run --bin crdgen | kubectl apply -f -
+helm template charts/doc-controller | kubectl apply -f -
+kubectl wait --for=condition=available deploy/doc-controller --timeout=30s
+kubectl port-forward service/doc-controller 8080:80
 ```
 
 ### Opentelemetry
-Setup an opentelemetry collector in your cluster. [Tempo](https://github.com/grafana/helm-charts/tree/main/charts/tempo) / [opentelemetry-operator](https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-operator) / [grafana agent](https://github.com/grafana/helm-charts/tree/main/charts/agent-operator) should all work out of the box. If your collector does not support grpc otlp you need to change the exporter in [`main.rs`](./src/main.rs).
 
-If you don't have a collector, you can build locally without the `telemetry` feature (`tilt up telemetry`), or pull images [without the `otel` tag](https://hub.docker.com/r/clux/controller/tags/).
+Build and run with `telemetry` feature, or configure it via `helm`:
+
+```sh
+helm template charts/doc-controller --set tracing.enabled=true | kubectl apply -f -
+```
+
+This requires an opentelemetry collector in your cluster. [Tempo](https://github.com/grafana/helm-charts/tree/main/charts/tempo) / [opentelemetry-operator](https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-operator) / [grafana agent](https://github.com/grafana/helm-charts/tree/main/charts/agent-operator) should all work out of the box. If your collector does not support grpc otlp you need to change the exporter in [`telemetry.rs`](./src/telemetry.rs).
+
+Note that the [images are pushed either with or without the telemetry feature](https://hub.docker.com/r/clux/controller/tags/) depending on whether the tag includes `otel`.
+
+### Metrics
+
+Metrics is available on `/metrics` and a `ServiceMonitor` is configurable from the chart:
+
+```sh
+helm template charts/doc-controller --set serviceMonitor.enabled=true | kubectl apply -f -
+```
 
 ## Running
 
@@ -43,22 +54,16 @@ If you don't have a collector, you can build locally without the `telemetry` fea
 cargo run
 ```
 
-or, with optional telemetry (change as per requirements):
+or, with optional telemetry:
 
 ```sh
 OPENTELEMETRY_ENDPOINT_URL=https://0.0.0.0:55680 RUST_LOG=info,kube=trace,controller=debug cargo run --features=telemetry
 ```
 
 ### In-cluster
-Use either your locally built image or the one from dockerhub (using opentelemetry features by default). Edit the [deployment](./yaml/deployment.yaml)'s image tag appropriately, and then:
-
-```sh
-kubectl apply -f yaml/deployment.yaml
-kubectl wait --for=condition=available deploy/doc-controller --timeout=20s
-kubectl port-forward service/doc-controller 8080:80
-```
+For prebuilt, edit the [chart values](./charts/doc-controller/values.yaml) or [snapshotted yaml](./yaml/deployment.yaml) and apply as you see fit (like above).
 
-To build and deploy the image quickly, we recommend using [tilt](https://tilt.dev/), via `tilt up` instead.
+To develop by building and deploying the image quickly, we recommend using [tilt](https://tilt.dev/), via `tilt up` instead.
 
 ## Usage
 In either of the run scenarios, your app is listening on port `8080`, and it will observe `Document` events.
@@ -102,7 +107,7 @@ $ curl 0.0.0.0:8080/
 {"last_event":"2019-07-17T22:31:37.591320068Z"}
 ```
 
-The metrics will be scraped by prometheus if you setup a `PodMonitor` or `ServiceMonitor` for it.
+The metrics will be scraped by prometheus if you setup a `ServiceMonitor` for it.
 
 ### Events
 The example `reconciler` only checks the `.spec.hidden` bool. If it does, it updates the `.status` object to reflect whether or not the instance `is_hidden`. It also sends a Kubernetes event associated with the controller. It is visible at the bottom of `kubectl describe doc samuel`.
````
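The chart options shown in the README diff above (`tracing.enabled`, `serviceMonitor.enabled`, `logging.env_filter`) can also be combined in a single values override instead of repeated `--set` flags. A minimal sketch, where the file name `my-values.yaml` is just an example:

```yaml
# my-values.yaml: hypothetical override combining the chart's new knobs
tracing:
  enabled: true          # also switches the image tag to the otel- variant
serviceMonitor:
  enabled: true          # emit a ServiceMonitor for prometheus scraping
logging:
  env_filter: info,kube=debug,controller=debug
```

Apply it with `helm template charts/doc-controller -f my-values.yaml | kubectl apply -f -`.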

charts/doc-controller/templates/_helpers.tpl

Lines changed: 7 additions & 0 deletions
```diff
@@ -17,3 +17,10 @@ app.kubernetes.io/version: {{ .Values.image.tag | default .Chart.AppVersion | qu
 app: {{ include "controller.name" . }}
 {{- end }}
 
+{{- define "controller.tag" -}}
+{{- if .Values.tracing.enabled }}
+{{- "otel-" }}{{ .Values.version | default .Chart.AppVersion }}
+{{- else }}
+{{- .Values.version | default .Chart.AppVersion }}
+{{- end }}
+{{- end }}
```
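The new `controller.tag` helper simply prefixes the resolved version with `otel-` when tracing is enabled. As a sanity check, the same branching can be sketched in plain shell; the `tag_for` function and its inputs are stand-ins for `.Values.tracing.enabled` and `.Chart.AppVersion`, not part of the chart:

```shell
# Stand-alone sketch of the controller.tag branching; tag_for is hypothetical.
tag_for() {
  tracing_enabled="$1"
  app_version="$2"
  if [ "$tracing_enabled" = "true" ]; then
    # tracing enabled: select the otel-suffixed image build
    echo "otel-${app_version}"
  else
    echo "${app_version}"
  fi
}

tag_for true 0.12.5   # prints otel-0.12.5
tag_for false 0.12.5  # prints 0.12.5
```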

charts/doc-controller/templates/deployment.yaml

Lines changed: 9 additions & 5 deletions
```diff
@@ -30,7 +30,7 @@ spec:
         {{- toYaml .Values.podSecurityContext | nindent 8 }}
       containers:
       - name: {{ .Chart.Name }}
-        image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
+        image: {{ .Values.image.repository }}:{{ include "controller.tag" . }}
         imagePullPolicy: {{ .Values.image.pullPolicy }}
         securityContext:
           {{- toYaml .Values.securityContext | nindent 10 }}
@@ -41,11 +41,15 @@ spec:
           containerPort: 8080
           protocol: TCP
         env:
-        # We are pointing to tempo or grafana tracing agent's otlp grpc receiver port
-        - name: OPENTELEMETRY_ENDPOINT_URL
-          value: "https://promstack-tempo.monitoring.svc.cluster.local:4317"
         - name: RUST_LOG
-          value: "info,kube=debug,controller=debug"
+          value: {{ .Values.logging.env_filter }}
+        {{- if .Values.tracing.enabled }}
+        - name: OPENTELEMETRY_ENDPOINT_URL
+          value: {{ .Values.tracing.endpoint }}
+        {{- end }}
+        {{- with .Values.env }}
+        {{- toYaml . | nindent 8 }}
+        {{- end }}
         readinessProbe:
           httpGet:
             path: /health
```
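Besides the tracing toggle, the new `{{- with .Values.env }}` block lets chart users append arbitrary extra environment variables to the container. A hedged sketch of such a values override, with an invented variable name for illustration:

```yaml
# values override: extra env entries are spliced in verbatim by toYaml
env:
  - name: EXAMPLE_FLAG   # hypothetical variable; the controller does not read it
    value: "on"
```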
Lines changed: 45 additions & 0 deletions
```diff
@@ -0,0 +1,45 @@
+{{- if .Values.serviceMonitor.enabled }}
+---
+apiVersion: monitoring.coreos.com/v1
+kind: ServiceMonitor
+metadata:
+  name: {{ include "controller.fullname" . }}
+  namespace: {{ .Values.namespace }}
+  labels:
+    {{- include "controller.labels" . | nindent 4 }}
+  {{- with .Values.service.annotations }}
+  annotations:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
+spec:
+  endpoints:
+  - port: http
+    {{- with .Values.serviceMonitor.interval }}
+    interval: {{ . }}
+    {{- end }}
+    {{- with .Values.serviceMonitor.scrapeTimeout }}
+    scrapeTimeout: {{ . }}
+    {{- end }}
+    honorLabels: true
+    path: {{ .Values.serviceMonitor.path }}
+    scheme: {{ .Values.serviceMonitor.scheme }}
+    {{- with .Values.serviceMonitor.relabelings }}
+    relabelings:
+      {{- toYaml . | nindent 6 }}
+    {{- end }}
+    {{- with .Values.serviceMonitor.metricRelabelings }}
+    metricRelabelings:
+      {{- toYaml . | nindent 6 }}
+    {{- end }}
+  jobLabel: {{ include "controller.fullname" . }}
+  selector:
+    matchLabels:
+      app: {{ include "controller.fullname" . }}
+  namespaceSelector:
+    matchNames:
+      - {{ .Values.namespace }}
+  {{- with .Values.serviceMonitor.targetLabels }}
+  targetLabels:
+    {{- toYaml . | nindent 4 }}
+  {{- end }}
+{{- end }}
```
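Since the whole manifest is guarded by `serviceMonitor.enabled` and fields like `interval` and `scrapeTimeout` sit inside `with` blocks, they only render when set. A sketch of values exercising the optional fields, with arbitrary example durations:

```yaml
serviceMonitor:
  enabled: true
  path: /metrics
  scheme: http
  interval: 30s        # optional; omitted from the manifest when unset
  scrapeTimeout: 10s   # optional
```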

charts/doc-controller/values.yaml

Lines changed: 15 additions & 6 deletions
```diff
@@ -1,19 +1,17 @@
 replicaCount: 1
 nameOverride: ""
 namespace: "default"
+version: "" # pin a specific version
 
 image:
   repository: clux/controller
   pullPolicy: IfNotPresent
-  tag: ""
 
 imagePullSecrets: []
 
 serviceAccount:
   annotations: {}
 podAnnotations: {}
-  # prometheus.io/scrape: "true"
-  # prometheus.io/port: "8080"
 
 podSecurityContext: {}
   # fsGroup: 2000
@@ -25,6 +23,16 @@ securityContext: {}
   # runAsNonRoot: true
   # runAsUser: 1000
 
+# Enable the feature-flagged opentelemetry trace layer pushing over grpc
+tracing:
+  enabled: false # prefixes tag with otel
+  endpoint: "https://promstack-tempo.monitoring.svc.cluster.local:4317"
+
+logging:
+  env_filter: info,kube=debug,controller=debug
+
+env: []
+
 service:
   type: ClusterIP
   port: 80
@@ -37,6 +45,7 @@ resources:
     cpu: 50m
     memory: 100Mi
 
-# TODO: evar option for otel
-# TODO: how to select between otel and non otel?
-# TODO: metrics scraping?
+serviceMonitor:
+  enabled: false
+  path: /metrics
+  scheme: http
```
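With `image.tag` removed, version pinning now goes through the new top-level `version` value, which the `controller.tag` helper combines with `tracing.enabled` to pick the plain or `otel-` image. A sketch, assuming `0.12.5` is a published tag:

```yaml
version: "0.12.5"   # empty string falls back to Chart.AppVersion
tracing:
  enabled: false    # true would select clux/controller:otel-0.12.5 instead
```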

yaml/deployment.yaml

Lines changed: 2 additions & 5 deletions
```diff
@@ -89,7 +89,7 @@ spec:
         {}
       containers:
      - name: doc-controller
-        image: "clux/controller:0.12.5"
+        image: clux/controller:0.12.5
        imagePullPolicy: IfNotPresent
        securityContext:
          {}
@@ -105,11 +105,8 @@ spec:
          containerPort: 8080
          protocol: TCP
        env:
-        # We are pointing to tempo or grafana tracing agent's otlp grpc receiver port
-        - name: OPENTELEMETRY_ENDPOINT_URL
-          value: "https://promstack-tempo.monitoring.svc.cluster.local:4317"
        - name: RUST_LOG
-          value: "info,kube=debug,controller=debug"
+          value: info,kube=debug,controller=debug
        readinessProbe:
          httpGet:
            path: /health
```
