
Commit 22f1f9e

Merge pull request #102355 from dfitzmau/OSDOCS-17072-batch5-18
[enterprise-4.18] OSDOCS-17072-batch5
2 parents fd2dbbc + ecfdb7f commit 22f1f9e

10 files changed: +144 -50 lines changed

modules/modifying-an-existing-ingress-controller.adoc

Lines changed: 4 additions & 2 deletions
@@ -16,12 +16,14 @@ As a cluster administrator, you can modify an existing Ingress Controller to man
 .Procedure
 
 . Modify the chosen `IngressController` to set `dnsManagementPolicy`:
-
 +
 [source,terminal]
 ----
 SCOPE=$(oc -n openshift-ingress-operator get ingresscontroller <name> -o=jsonpath="{.status.endpointPublishingStrategy.loadBalancer.scope}")
-
+----
++
+[source,terminal]
+----
 oc -n openshift-ingress-operator patch ingresscontrollers/<name> --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"dnsManagementPolicy":"Unmanaged", "scope":"${SCOPE}"}}}}'
 ----
 
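The change above splits a two-command listing into two blocks: first capture the scope, then splice it into a JSON merge patch. A minimal offline sketch of that capture-and-substitute pattern, with a mock scope value standing in for the real `oc ... -o=jsonpath` query:

```shell
# Mock value; the real procedure captures this with:
#   oc -n openshift-ingress-operator get ingresscontroller <name> -o=jsonpath=...
SCOPE="External"

# Build the same merge-patch payload that `oc patch --type=merge` sends,
# substituting the captured scope into the JSON.
PATCH=$(printf '{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"dnsManagementPolicy":"Unmanaged","scope":"%s"}}}}' "$SCOPE")
echo "$PATCH"
```

Because the shell expands `${SCOPE}` before `oc` runs, the patch must be issued in the same shell session that set the variable.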

modules/monitoring-enabling-query-logging-for-thanos-querier.adoc

Lines changed: 8 additions & 3 deletions
@@ -49,7 +49,7 @@ data:
 ----
 <1> Set the value to `true` to enable logging and `false` to disable logging. The default value is `false`.
 <2> Set the value to `debug`, `info`, `warn`, or `error`. If no value exists for `logLevel`, the log level defaults to `error`.
-+
+
 . Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed.
 
 .Verification
@@ -60,14 +60,19 @@ data:
 ----
 $ oc -n openshift-monitoring get pods
 ----
-+
+
 . Run a test query using the following sample commands as a model:
 +
 [source,terminal]
 ----
 $ token=`oc create token prometheus-k8s -n openshift-monitoring`
+----
++
+[source,terminal]
+----
 $ oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H "Authorization: Bearer $token" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'
 ----
+
 . Run the following command to read the query log:
 +
 [source,terminal]
@@ -79,6 +84,6 @@ $ oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query
 ====
 Because the `thanos-querier` pods are highly available (HA) pods, you might be able to see logs in only one pod.
 ====
-+
+
 . After you examine the logged query information, disable query logging by changing the `enableRequestLogging` value to `false` in the config map.
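The test-query step follows a capture-then-use pattern: create a token, then pass it to `curl` as a bearer header. An offline sketch of the header construction, with a mock token in place of the `oc create token` output (no cluster needed):

```shell
# Mock token; the real step captures it with:
#   token=`oc create token prometheus-k8s -n openshift-monitoring`
token="sha256~mock-token"

# The curl command passes the token as an Authorization header.
AUTH_HEADER="Authorization: Bearer $token"
echo "$AUTH_HEADER"
```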

modules/nodes-cluster-worker-latency-profiles-examining.adoc

Lines changed: 10 additions & 3 deletions
@@ -2,7 +2,6 @@
 //
 // scalability_and_performance/scaling-worker-latency-profiles.adoc
 
-
 :_mod-docs-content-type: PROCEDURE
 [id="nodes-cluster-worker-latency-profiles-examining_{context}"]
 = Example steps for displaying resulting values of workerLatencyProfile
@@ -46,14 +45,22 @@ node-monitor-grace-period:
 [source,terminal]
 ----
 $ oc debug node/<worker-node-name>
+----
++
+[source,terminal]
+----
 $ chroot /host
+----
++
+[source,terminal]
+----
 # cat /etc/kubernetes/kubelet.conf|grep nodeStatusUpdateFrequency
 ----
 +
 .Example output
 [source,terminal]
 ----
-“nodeStatusUpdateFrequency”: “10s”
+"nodeStatusUpdateFrequency": "10s"
 ----
 
-These outputs validate the set of timing variables for the Worker Latency Profile.
+These outputs validate the set of timing variables for the Worker Latency Profile.
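The `kubelet.conf` check in the procedure is a plain `grep` over a JSON field. An offline sketch against a mock config fragment (on a real node the file lives at `/etc/kubernetes/kubelet.conf`):

```shell
# Write a mock kubelet config fragment, then grep it exactly as the step does.
cat > /tmp/mock-kubelet.conf <<'EOF'
{
  "kind": "KubeletConfiguration",
  "nodeStatusUpdateFrequency": "10s"
}
EOF
grep nodeStatusUpdateFrequency /tmp/mock-kubelet.conf
```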

modules/nodes-containers-dev-fuse-configuring.adoc

Lines changed: 23 additions & 14 deletions
@@ -12,7 +12,7 @@ By exposing the `/dev/fuse` device to an unprivileged pod, you grant it the capa
 
 . Define the pod with `/dev/fuse` access:
 +
-* Create a YAML file named `fuse-builder-pod.yaml` with the following content:
+.. Create a YAML file named `fuse-builder-pod.yaml` with the following content:
 +
 [source,yaml]
 ----
@@ -21,29 +21,30 @@ kind: Pod
 metadata:
   name: fuse-builder-pod
   annotations:
-    io.kubernetes.cri-o.Devices: "/dev/fuse" <1>
+    io.kubernetes.cri-o.Devices: "/dev/fuse"
 spec:
   containers:
   - name: build-container
-    image: quay.io/podman/stable <2>
+    image: quay.io/podman/stable
     command: ["/bin/sh", "-c"]
-    args: ["echo 'Container is running. Use oc exec to get a shell.'; sleep infinity"] <3>
-    securityContext: <4>
+    args: ["echo 'Container is running. Use oc exec to get a shell.'; sleep infinity"]
+    securityContext:
       runAsUser: 1000
 ----
 +
-<1> The `io.kubernetes.cri-o.Devices: "/dev/fuse"` annotation makes the FUSE device available.
-<2> This annotation specifies a container that uses an image that includes `podman` (for example, `quay.io/podman/stable`).
-<3> This command keeps the container running so you can `exec` into it.
-<4> This annotation specifies a `securityContext` that runs the container as an unprivileged user (for example, `runAsUser: 1000`).
-*
+where:
++
+`io.kubernetes.cri-o.Devices`:: The `io.kubernetes.cri-o.Devices: "/dev/fuse"` annotation makes the FUSE device available.
+`image`:: This annotation specifies a container that uses an image that includes `podman` (for example, `quay.io/podman/stable`).
+`args`:: This command keeps the container running so you can `exec` into it.
+`securityContext`:: This annotation specifies a `securityContext` that runs the container as an unprivileged user (for example, `runAsUser: 1000`).
 +
 [NOTE]
 ====
 Depending on your cluster's Security Context Constraints (SCCs) or other policies, you might need to further adjust the `securityContext` specification, for example, by allowing specific capabilities if `/dev/fuse` alone is not sufficient for `fuse-overlayfs` to operate.
 ====
 +
-* Create the pod by running the following command:
+.. Create the pod by running the following command:
 +
 [source,terminal]
 ----
@@ -71,7 +72,15 @@ You are now inside the container. Because the default working directory might no
 [source,terminal]
 ----
 $ cd /tmp
+----
++
+[source,terminal]
+----
 $ pwd
+----
++
+[source,terminal]
+----
 /tmp
 ----
 
@@ -115,21 +124,21 @@ This should output the content of the `/app/build_info.txt` file and the copied
 
 . Exit the pod and clean up:
 +
-* After you are done, exit the shell session in the pod:
+.. After you are done, exit the shell session in the pod:
 +
 [source,terminal]
 ----
 $ exit
 ----
 +
-* You can then delete the pod if it's no longer needed:
+.. Delete the pod if it's no longer needed:
 +
 [source,terminal]
 ----
 $ oc delete pod fuse-builder-pod
 ----
 +
-* Remove the local YAML file:
+.. Remove the local YAML file:
 +
 [source,terminal]
 ----
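Before troubleshooting the pod spec, it can help to confirm that the FUSE device node exists at all on the host. A small sketch that only tests for the path (present on most Linux hosts; the check itself is harmless anywhere):

```shell
# /dev/fuse is the device node that the io.kubernetes.cri-o.Devices
# annotation exposes to the pod.
if [ -e /dev/fuse ]; then
  MSG="fuse device available"
else
  MSG="fuse device missing"
fi
echo "$MSG"
```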

modules/nw-control-dns-records-public-hosted-zone-azure.adoc

Lines changed: 16 additions & 1 deletion
@@ -20,9 +20,25 @@ You can create Domain Name Server (DNS) records on a public or private DNS zone
 [source,terminal]
 ----
 $ CLIENT_ID=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_id}} | base64 -d)
+----
++
+[source,terminal]
+----
 $ CLIENT_SECRET=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_client_secret}} | base64 -d)
+----
++
+[source,terminal]
+----
 $ RESOURCE_GROUP=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_resourcegroup}} | base64 -d)
+----
++
+[source,terminal]
+----
 $ SUBSCRIPTION_ID=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_subscription_id}} | base64 -d)
+----
++
+[source,terminal]
+----
 $ TENANT_ID=$(oc get secrets azure-credentials -n kube-system --template={{.data.azure_tenant_id}} | base64 -d)
 ----
 
@@ -64,7 +80,6 @@ $ az network dns zone list --resource-group "${RESOURCE_GROUP}"
 $ az network private-dns zone list -g "${RESOURCE_GROUP}"
 ----
 
-
 . Create a YAML file, for example, `external-dns-sample-azure.yaml`, that defines the `ExternalDNS` object:
 +
 .Example `external-dns-sample-azure.yaml` file
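All five credential commands share one pattern: read a field from the `azure-credentials` secret, then base64-decode it. An offline sketch of the decode half, with a mock encoded value in place of the `oc get secrets --template` output:

```shell
# Mock what the secret field would contain (base64-encoded data).
ENCODED=$(printf '%s' 'mock-azure-client-id' | base64)

# Decode it the same way the procedure does with `| base64 -d`.
CLIENT_ID=$(printf '%s' "$ENCODED" | base64 -d)
echo "$CLIENT_ID"
```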

modules/oadp-using-ca-certificates-with-velero-command.adoc

Lines changed: 5 additions & 2 deletions
@@ -44,7 +44,10 @@ Server:
 [source,terminal]
 ----
 $ CA_CERT=$(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}')
-
+----
++
+[source,terminal]
+----
 $ [[ -n $CA_CERT ]] && echo "$CA_CERT" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "cat > /tmp/your-cacert.txt" || echo "DPA BSL has no caCert"
 ----
 +
@@ -72,4 +75,4 @@ $ oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c "ls /tmp/your-c
 /tmp/your-cacert.txt
 ----
 
-In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
+In a future release of OpenShift API for Data Protection (OADP), we plan to mount the certificate to the Velero pod so that this step is not required.
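The second command in the procedure guards on whether the DPA has a `caCert` at all before decoding and writing it. An offline sketch of that guard, with mock values in place of the `oc` calls (an `if` replaces the doc's `[[ ... ]] && ... || ...` one-liner):

```shell
# Mock the jsonpath result; in the real procedure this is the
# base64-encoded caCert read from the DataProtectionApplication.
CA_CERT=$(printf '%s' 'mock-pem-data' | base64)

# Decode and write the cert only when one is present; otherwise report it.
if [ -n "$CA_CERT" ]; then
  printf '%s' "$CA_CERT" | base64 -d > /tmp/mock-cacert.txt
else
  echo "DPA BSL has no caCert"
fi
cat /tmp/mock-cacert.txt
```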

modules/op-authenticating-to-an-oci-registry.adoc

Lines changed: 21 additions & 8 deletions
@@ -15,11 +15,21 @@ Before pushing signatures to an OCI registry, cluster administrators must config
 +
 [source,terminal]
 ----
-$ export NAMESPACE=<namespace> <1>
-$ export SERVICE_ACCOUNT_NAME=<service_account> <2>
+$ export NAMESPACE=<namespace>
 ----
-<1> The namespace associated with the service account.
-<2> The name of the service account.
++
+where:
++
+`<namespace>`:: The namespace associated with the service account.
++
+[source,terminal]
+----
+$ export SERVICE_ACCOUNT_NAME=<service_account>
+----
++
+where:
++
+`<service_account>`:: The name of the service account.
 
 . Create a Kubernetes secret.
 +
@@ -41,14 +51,14 @@ $ oc patch serviceaccount $SERVICE_ACCOUNT_NAME \
 ----
 +
 If you patch the default `pipeline` service account that {pipelines-title} assigns to all task runs, the {pipelines-title} Operator will override the service account. As a best practice, you can perform the following steps:
-
++
 .. Create a separate service account to assign to user's task runs.
 +
 [source,terminal]
 ----
 $ oc create serviceaccount <service_account_name>
 ----
-
++
 .. Associate the service account to the task runs by setting the value of the `serviceaccountname` field in the task run template.
 +
 [source,yaml]
@@ -58,9 +68,12 @@ kind: TaskRun
 metadata:
   name: build-push-task-run-2
 spec:
-  serviceAccountName: build-bot <1>
+  serviceAccountName: build-bot
   taskRef:
     name: build-push
 ...
 ----
-<1> Substitute with the name of the newly created service account.
++
+where:
++
+`<serviceAccountName>`:: Substitute with the name of the newly created service account.
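One way to inject the newly created service account name into a TaskRun template is plain substitution. This offline sketch uses `sed` with a placeholder and a mock name; the placeholder, file paths, and the trimmed template are illustrative, not part of the procedure:

```shell
# Mock name; use the account created with `oc create serviceaccount`.
SA_NAME="build-bot"

# A TaskRun template with a placeholder for the service account name.
cat > /tmp/taskrun-template.yaml <<'EOF'
kind: TaskRun
metadata:
  name: build-push-task-run-2
spec:
  serviceAccountName: __SA_NAME__
  taskRef:
    name: build-push
EOF

# Substitute the placeholder and inspect the resulting field.
sed "s/__SA_NAME__/$SA_NAME/" /tmp/taskrun-template.yaml > /tmp/taskrun.yaml
grep serviceAccountName /tmp/taskrun.yaml
```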

modules/op-using-tekton-chains-to-sign-and-verify-image-and-provenance.adoc

Lines changed: 37 additions & 16 deletions
@@ -36,9 +36,9 @@ $ cosign generate-key-pair k8s://openshift-pipelines/signing-secrets
 Provide a password when prompted. Cosign stores the resulting private key as part of the `signing-secrets` Kubernetes secret in the `openshift-pipelines` namespace, and writes the public key to the `cosign.pub` local file.
 
 . Configure authentication for the image registry.
-
++
 .. To configure the {tekton-chains} controller for pushing signature to an OCI registry, use the credentials associated with the service account of the task run. For detailed information, see the "Authenticating to an OCI registry" section.
-
++
 .. To configure authentication for a Kaniko task that builds and pushes image to the registry, create a Kubernetes secret of the docker `config.json` file containing the required credentials.
 +
 [source,terminal]
@@ -54,33 +54,51 @@ $ oc create secret generic <docker_config_secret_name> \ <1>
 [source,terminal]
 ----
 $ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.format": "in-toto"}}'
-
+----
++
+[source,terminal]
+----
 $ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.storage": "oci"}}'
-
+----
++
+[source,terminal]
+----
 $ oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"transparency.enabled": "true"}}'
 ----
 
 . Start the Kaniko task.
-
++
 .. Apply the Kaniko task to the cluster.
 +
 [source,terminal]
 ----
 $ oc apply -f examples/kaniko/kaniko.yaml <1>
 ----
-<1> Substitute with the URI or file path to your Kaniko task.
-
++
+where:
++
+`<examples/kaniko/kaniko.yaml>`:: Substitute with the URI or file path to your Kaniko task.
++
 .. Set the appropriate environment variables.
 +
 [source,terminal]
 ----
-$ export REGISTRY=<url_of_registry> <1>
-
+$ export REGISTRY=<url_of_registry>
+----
++
+where:
++
+`<url_of_registry>`:: Substitute with the URL of the registry where you want to push the image.
++
+[source,terminal]
+----
 $ export DOCKERCONFIG_SECRET_NAME=<name_of_the_secret_in_docker_config_json> <2>
 ----
-<1> Substitute with the URL of the registry where you want to push the image.
-<2> Substitute with the name of the secret in the docker `config.json` file.
-
++
+where:
++
+`<name_of_the_secret_in_docker_config_json>`:: Substitute with the name of the secret in the docker `config.json` file.
++
 .. Start the Kaniko task.
 +
 [source,terminal]
@@ -109,14 +127,17 @@ $ oc get tr <task_run_name> \ <1>
 [source,terminal]
 ----
 $ cosign verify --key cosign.pub $REGISTRY/kaniko-chains
-
+----
++
+[source,terminal]
+----
 $ cosign verify-attestation --key cosign.pub $REGISTRY/kaniko-chains
 ----
 
 . Find the provenance for the image in Rekor.
-
++
 .. Get the digest of the $REGISTRY/kaniko-chains image. You can search for it in the task run, or pull the image to extract the digest.
-
++
 .. Search Rekor to find all entries that match the `sha256` digest of the image.
 +
 [source,terminal]
@@ -132,7 +153,7 @@ $ rekor-cli search --sha <image_digest> <1>
 <3> The second matching UUID.
 +
 The search result displays UUIDs of the matching entries. One of those UUIDs holds the attestation.
-
++
 .. Check the attestation.
 +
 [source,terminal]
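Rekor is searched by the image's `sha256` digest. An offline sketch of producing a digest in that shape over mock manifest bytes (registries likewise derive image digests as a sha256 over the manifest; a real digest comes from the task run or by pulling the image):

```shell
# Mock manifest bytes standing in for a real image manifest.
printf '%s' '{"schemaVersion":2}' > /tmp/mock-manifest.json

# sha256 over the bytes, formatted like an OCI digest reference.
DIGEST=$(sha256sum /tmp/mock-manifest.json | awk '{print $1}')
echo "sha256:$DIGEST"
```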
