
Commit af73e5b

Merge pull request #101819 from dfitzmau/OSDOCS-17072-batch3
OSDOCS-17072-batch3
2 parents 315f38d + 69dc8eb

17 files changed, +417 -130 lines

modules/configuring-vsphere-host-groups.adoc

Lines changed: 2 additions & 3 deletions
@@ -35,7 +35,6 @@ To enable host group support, you must define multiple failure domains for your
 ====
 If you specify different names for the `openshift-region` and `openshift-zone` vCenter tag categories, the installation of the {product-title} cluster fails.
 ====
-
 +
 [source,terminal]
 ----
@@ -76,7 +75,7 @@ $ govc tags.attach -c <zone_tag_category> <zone_tag_for_host_group_1> /<datacent
 ----
 
 . Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements.
-
++
 .Sample `install-config.yaml` file with multiple host groups
 
 [source,yaml]
@@ -122,4 +121,4 @@ platform:
 datastore: "/<data_center_1>/datastore/<datastore_1>"
 resourcePool: "/<data_center_1>/host/<cluster_1>/Resources/<resourcePool_1>"
 folder: "/<data_center_1>/vm/<folder_1>"
-----
+----
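
For context, the workflow this module documents comes down to creating an `openshift-zone` tag category, creating one tag per host group, and attaching each tag before running the installer. A minimal sketch with placeholder names; the category and tag names follow the module's angle-bracket convention, and the exact inventory path for a host group is an assumption here because it is truncated in the hunk header above:

[source,terminal]
----
$ govc tags.category.create -d "OpenShift zone" <zone_tag_category>   # create the zone tag category
$ govc tags.create -c <zone_tag_category> <zone_tag_for_host_group_1> # create a tag for the host group
$ govc tags.attach -c <zone_tag_category> <zone_tag_for_host_group_1> \
    /<datacenter_1>/host/<cluster_1>   # attach it to the target object per the module's instructions
----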

modules/configuring-vsphere-regions-zones.adoc

Lines changed: 8 additions & 7 deletions
@@ -20,6 +20,7 @@ The default `install-config.yaml` file configuration from the previous release o
 ====
 You must specify at least one failure domain for your {product-title} cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your {product-title} cluster.
 ====
++
 * You have installed the `govc` command line tool.
 +
 [IMPORTANT]
@@ -75,30 +76,30 @@ $ govc tags.attach -c <zone_tag_category> <zone_tag_1> /<data_center_1>/host/<cl
 ----
 
 . Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements.
-
++
 .Sample `install-config.yaml` file with multiple data centers defined in a vSphere center
 
 [source,yaml]
 ----
----
+# ...
 compute:
 ---
 vsphere:
 zones:
 - "<machine_pool_zone_1>"
 - "<machine_pool_zone_2>"
----
+# ...
 controlPlane:
----
+# ...
 vsphere:
 zones:
 - "<machine_pool_zone_1>"
 - "<machine_pool_zone_2>"
----
+# ...
 platform:
 vsphere:
 vcenters:
----
+# ...
 datacenters:
 - <data_center_1_name>
 - <data_center_2_name>
@@ -127,5 +128,5 @@ platform:
 datastore: "/<data_center_2>/datastore/<datastore2>"
 resourcePool: "/<data_center_2>/host/<cluster2>/Resources/<resourcePool2>"
 folder: "/<data_center_2>/vm/<folder2>"
----
+# ...
 ----
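
As a reading aid for this module, the regions-and-zones model pairs a vCenter data center with an `openshift-region` tag and a compute cluster with an `openshift-zone` tag. A minimal sketch of the two attach commands with placeholder paths; the region-to-datacenter pairing is inferred from the surrounding module, and the cluster path is truncated in the hunk header above, so treat both as assumptions:

[source,terminal]
----
$ govc tags.attach -c <region_tag_category> <region_tag_1> /<data_center_1>                 # region tag on the data center
$ govc tags.attach -c <zone_tag_category> <zone_tag_1> /<data_center_1>/host/<cluster_1>    # zone tag on the compute cluster
----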

modules/dynamic-plug-in-development.adoc

Lines changed: 15 additions & 2 deletions
@@ -9,6 +9,7 @@
 You can run the plugin using a local development environment. The {product-title} web console runs in a container connected to the cluster you have logged into.
 
 .Prerequisites
+
 * You must have cloned the link:https://github.com/openshift/console-plugin-template[`console-plugin-template`] repository, which contains a template for creating plugins.
 +
 [IMPORTANT]
@@ -40,7 +41,6 @@ $ yarn install
 ----
 
 . After installing, run the following command to start yarn.
-
 +
 [source,terminal]
 ----
@@ -68,11 +68,24 @@ The `yarn run start-console` command runs an `amd64` image and might fail when r
 [source,terminal]
 ----
 $ podman machine ssh
+----
+
+[source,terminal]
+----
 $ sudo -i
+----
+
+[source,terminal]
+----
 $ rpm-ostree install qemu-user-static
+----
+
+[source,terminal]
+----
 $ systemctl reboot
 ----
 ====
 
 .Verification
-* Visit link:http://localhost:9000/example[localhost:9000] to view the running plugin. Inspect the value of `window.SERVER_FLAGS.consolePlugins` to see the list of plugins which load at runtime.
+
+* Visit link:http://localhost:9000/example[localhost:9000] to view the running plugin. Inspect the value of `window.SERVER_FLAGS.consolePlugins` to see the list of plugins which load at runtime.
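
The hunk above splits what is effectively one interactive sequence into four separate terminal blocks. Run back to back inside the Podman machine, the same commands from the diff look roughly like this (a sketch for readability, not an addition to the procedure):

[source,terminal]
----
$ podman machine ssh                      # open a shell in the Podman machine
$ sudo -i                                 # become root inside the machine
$ rpm-ostree install qemu-user-static     # layer the amd64 emulation package
$ systemctl reboot                        # reboot so the layered package takes effect
----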

modules/gitops-default-permissions-of-an-argocd-instance.adoc

Lines changed: 10 additions & 6 deletions
@@ -9,34 +9,38 @@
 
 By default Argo CD instance has the following permissions:
 
-* Argo CD instance has the `admin` privileges to manage resources only in the namespace where it is deployed. For instance, an Argo CD instance deployed in the **foo** namespace has the `admin` privileges to manage resources only for that namespace.
+* Argo CD instance has the `admin` privileges to manage resources only in the namespace where it is deployed. For instance, an Argo CD instance deployed in the *foo* namespace has the `admin` privileges to manage resources only for that namespace.
 
 * Argo CD has the following cluster-scoped permissions because Argo CD requires cluster-wide `read` privileges on resources to function appropriately:
 +
-[source,yaml]
+[source,yaml,subs="attributes+"]
 ----
 - verbs:
 - get
 - list
 - watch
 apiGroups:
-- '*'
+- /'*'
 resources:
-- '*'
+- /'*'
 - verbs:
 - get
 - list
 nonResourceURLs:
-- '*'
+- /'*'
 ----
 
 [NOTE]
 ====
 * You can edit the cluster roles used by the `argocd-server` and `argocd-application-controller` components where Argo CD is running such that the `write` privileges are limited to only the namespaces and resources that you wish Argo CD to manage.
-+
+
 [source,terminal]
 ----
 $ oc edit clusterrole argocd-server
+----
+
+[source,terminal]
+----
 $ oc edit clusterrole argocd-application-controller
 ----
 ====
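
The note in this hunk points at two cluster roles. If you want to review the current rules before narrowing the `write` privileges, a typical sequence is to dump each role and then edit it; this is a sketch of that workflow, where the role names come from the module above and the `-o yaml` inspection step is an assumption about workflow rather than part of the documented procedure:

[source,terminal]
----
$ oc get clusterrole argocd-server -o yaml                     # review the current rules
$ oc edit clusterrole argocd-server                            # narrow the write privileges
$ oc get clusterrole argocd-application-controller -o yaml
$ oc edit clusterrole argocd-application-controller
----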

modules/hosted-cluster-etcd-backup-restore-on-premise.adoc

Lines changed: 25 additions & 24 deletions
@@ -18,7 +18,7 @@ include::snippets/technology-preview.adoc[]
 .Procedure
 
 . First, set up your environment variables:
-
++
 .. Set up environment variables for your hosted cluster by entering the following commands, replacing values as necessary:
 +
 [source,terminal]
@@ -35,7 +35,7 @@ $ HOSTED_CLUSTER_NAMESPACE=clusters
 ----
 $ CONTROL_PLANE_NAMESPACE="${HOSTED_CLUSTER_NAMESPACE}-${CLUSTER_NAME}"
 ----
-
++
 .. Pause reconciliation of the hosted cluster by entering the following command, replacing values as necessary:
 +
 [source,terminal]
@@ -45,17 +45,18 @@ $ oc patch -n ${HOSTED_CLUSTER_NAMESPACE} hostedclusters/${CLUSTER_NAME} \
 ----
 
 . Next, take a snapshot of etcd by using one of the following methods:
-
++
 .. Use a previously backed-up snapshot of etcd.
++
 .. If you have an available etcd pod, take a snapshot from the active etcd pod by completing the following steps:
-
++
 ... List etcd pods by entering the following command:
 +
 [source,terminal]
 ----
 $ oc get -n ${CONTROL_PLANE_NAMESPACE} pods -l app=etcd
 ----
-
++
 ... Take a snapshot of the pod database and save it locally to your machine by entering the following commands:
 +
 [source,terminal]
@@ -73,7 +74,7 @@ $ oc exec -n ${CONTROL_PLANE_NAMESPACE} -c etcd -t ${ETCD_POD} -- \
 --endpoints=https://localhost:2379 \
 snapshot save /var/lib/snapshot.db
 ----
-
++
 ... Verify that the snapshot is successful by entering the following command:
 +
 [source,terminal]
@@ -82,15 +83,15 @@ $ oc exec -n ${CONTROL_PLANE_NAMESPACE} -c etcd -t ${ETCD_POD} -- \
 env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status \
 /var/lib/snapshot.db
 ----
-
++
 .. Make a local copy of the snapshot by entering the following command:
 +
 [source,terminal]
 ----
 $ oc cp -c etcd ${CONTROL_PLANE_NAMESPACE}/${ETCD_POD}:/var/lib/snapshot.db \
 /tmp/etcd.snapshot.db
 ----
-
++
 ... Make a copy of the snapshot database from etcd persistent storage:
 +
 .... List etcd pods by entering the following command:
@@ -99,7 +100,7 @@ $ oc cp -c etcd ${CONTROL_PLANE_NAMESPACE}/${ETCD_POD}:/var/lib/snapshot.db \
 ----
 $ oc get -n ${CONTROL_PLANE_NAMESPACE} pods -l app=etcd
 ----
-
++
 .... Find a pod that is running and set its name as the value of `ETCD_POD: ETCD_POD=etcd-0`, and then copy its snapshot database by entering the following command:
 +
 [source,terminal]
@@ -115,16 +116,16 @@ $ oc cp -c etcd \
 ----
 $ oc scale -n ${CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=0
 ----
-
++
 .. Delete volumes for second and third members by entering the following command:
 +
 [source,terminal]
 ----
 $ oc delete -n ${CONTROL_PLANE_NAMESPACE} pvc/data-etcd-1 pvc/data-etcd-2
 ----
-
++
 .. Create a pod to access the first etcd member's data:
-
++
 ... Get the etcd image by entering the following command:
 +
 [source,terminal]
@@ -135,7 +136,7 @@ $ ETCD_IMAGE=$(oc get -n ${CONTROL_PLANE_NAMESPACE} statefulset/etcd \
 +
 ... Create a pod that allows access to etcd data:
 +
-[source,yaml]
+[source,yaml,subs="attributes+"]
 ----
 $ cat << EOF | oc apply -n ${CONTROL_PLANE_NAMESPACE} -f -
 apiVersion: apps/v1
@@ -170,32 +171,32 @@ spec:
 - name: data
 persistentVolumeClaim:
 claimName: data-etcd-0
-EOF
+EOF
 ----
-
++
 ... Check the status of the `etcd-data` pod and wait for it to be running by entering the following command:
 +
 [source,terminal]
 ----
 $ oc get -n ${CONTROL_PLANE_NAMESPACE} pods -l app=etcd-data
 ----
-
++
 ... Get the name of the `etcd-data` pod by entering the following command:
 +
 [source,terminal]
 ----
 $ DATA_POD=$(oc get -n ${CONTROL_PLANE_NAMESPACE} pods --no-headers \
 -l app=etcd-data -o name | cut -d/ -f2)
 ----
-
++
 .. Copy an etcd snapshot into the pod by entering the following command:
 +
 [source,terminal]
 ----
 $ oc cp /tmp/etcd.snapshot.db \
 ${CONTROL_PLANE_NAMESPACE}/${DATA_POD}:/var/lib/restored.snap.db
 ----
-
++
 .. Remove old data from the `etcd-data` pod by entering the following commands:
 +
 [source,terminal]
@@ -207,7 +208,7 @@ $ oc exec -n ${CONTROL_PLANE_NAMESPACE} ${DATA_POD} -- rm -rf /var/lib/data
 ----
 $ oc exec -n ${CONTROL_PLANE_NAMESPACE} ${DATA_POD} -- mkdir -p /var/lib/data
 ----
-
++
 .. Restore the etcd snapshot by entering the following command:
 +
 [source,terminal]
@@ -220,29 +221,29 @@ $ oc exec -n ${CONTROL_PLANE_NAMESPACE} ${DATA_POD} -- \
 --initial-cluster etcd-0=https://etcd-0.etcd-discovery.${CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-1=https://etcd-1.etcd-discovery.${CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-2=https://etcd-2.etcd-discovery.${CONTROL_PLANE_NAMESPACE}.svc:2380 \
 --initial-advertise-peer-urls https://etcd-0.etcd-discovery.${CONTROL_PLANE_NAMESPACE}.svc:2380
 ----
-
++
 .. Remove the temporary etcd snapshot from the pod by entering the following command:
 +
 [source,terminal]
 ----
 $ oc exec -n ${CONTROL_PLANE_NAMESPACE} ${DATA_POD} -- \
 rm /var/lib/restored.snap.db
 ----
-
++
 .. Delete data access deployment by entering the following command:
 +
 [source,terminal]
 ----
 $ oc delete -n ${CONTROL_PLANE_NAMESPACE} deployment/etcd-data
 ----
-
++
 .. Scale up the etcd cluster by entering the following command:
 +
 [source,terminal]
 ----
 $ oc scale -n ${CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=3
 ----
-
++
 .. Wait for the etcd member pods to return and report as available by entering the following command:
 +
 [source,terminal]
@@ -291,4 +292,4 @@ $ oc annotate hostedcluster -n \
 --overwrite
 ----
 +
-After a few minutes, the control plane pods start running.
+After a few minutes, the control plane pods start running.
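
At its core, the backup half of this procedure is the standard `etcdctl snapshot save` / `snapshot status` pair, run inside the etcd pod and then copied off with `oc cp`. A condensed sketch of those steps as they appear in the hunks above; the TLS certificate flags that the full module passes to `etcdctl` are elided here because they are not visible in this diff, so treat this as an outline rather than a copy-and-paste command set:

[source,terminal]
----
$ ETCD_POD=etcd-0   # a running etcd pod, as set earlier in the procedure
$ oc exec -n ${CONTROL_PLANE_NAMESPACE} -c etcd -t ${ETCD_POD} -- \
    env ETCDCTL_API=3 /usr/bin/etcdctl --endpoints=https://localhost:2379 \
    snapshot save /var/lib/snapshot.db            # take the snapshot inside the pod
$ oc exec -n ${CONTROL_PLANE_NAMESPACE} -c etcd -t ${ETCD_POD} -- \
    env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status \
    /var/lib/snapshot.db                          # verify the snapshot
$ oc cp -c etcd ${CONTROL_PLANE_NAMESPACE}/${ETCD_POD}:/var/lib/snapshot.db \
    /tmp/etcd.snapshot.db                         # copy it to the local machine
----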
