modules/configuring-vsphere-host-groups.adoc (2 additions, 3 deletions)

@@ -35,7 +35,6 @@ To enable host group support, you must define multiple failure domains for your
====
If you specify different names for the `openshift-region` and `openshift-zone` vCenter tag categories, the installation of the {product-title} cluster fails.
====
. Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements.

.Sample `install-config.yaml` file with multiple host groups
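The tag categories named in the admonition above can be created with `govc` before you run the installation program. A minimal sketch, assuming `govc` is already pointed at your vCenter server; the category names must be exactly `openshift-region` and `openshift-zone`, while the descriptions are free-form:

[source,terminal]
----
$ govc tags.category.create -d "OpenShift region" openshift-region
$ govc tags.category.create -d "OpenShift zone" openshift-zone
----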
modules/configuring-vsphere-regions-zones.adoc (8 additions, 7 deletions)

@@ -20,6 +20,7 @@ The default `install-config.yaml` file configuration from the previous release o
====
You must specify at least one failure domain for your {product-title} cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your {product-title} cluster.
====

* You have installed the `govc` command line tool.
. Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements.

.Sample `install-config.yaml` file with multiple data centers defined in a vSphere center
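The failure-domain stanza that such a sample defines ties a region/zone pair to concrete vCenter topology. A minimal sketch of one entry, with all names and paths hypothetical:

[source,yaml]
----
platform:
  vsphere:
    failureDomains:
    - name: us-east-1a             # hypothetical failure domain name
      region: us-east              # matched to an openshift-region tag
      zone: us-east-1a             # matched to an openshift-zone tag
      server: vcenter.example.com  # hypothetical vCenter server
      topology:
        datacenter: datacenter-east
        computeCluster: /datacenter-east/host/cluster-1a
        datastore: /datacenter-east/datastore/datastore-1a
        networks:
        - VM_Network
----

Defining a second entry with a different `region`/`zone` pair and topology is what enables multiple regions and zones for the cluster.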
modules/dynamic-plug-in-development.adoc (15 additions, 2 deletions)

@@ -9,6 +9,7 @@
You can run the plugin using a local development environment. The {product-title} web console runs in a container connected to the cluster you have logged into.

.Prerequisites

* You must have cloned the link:https://github.com/openshift/console-plugin-template[`console-plugin-template`] repository, which contains a template for creating plugins.

[IMPORTANT]

@@ -40,7 +41,6 @@ $ yarn install
----

. After installing, run the following command to start yarn.

[source,terminal]
----

@@ -68,11 +68,24 @@ The `yarn run start-console` command runs an `amd64` image and might fail when r
[source,terminal]
----
$ podman machine ssh
----

[source,terminal]
----
$ sudo -i
----

[source,terminal]
----
$ rpm-ostree install qemu-user-static
----

[source,terminal]
----
$ systemctl reboot
----
====

.Verification

* Visit link:http://localhost:9000/example[localhost:9000] to view the running plugin. Inspect the value of `window.SERVER_FLAGS.consolePlugins` to see the list of plugins which load at runtime.
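The verification step above can be made explicit with a small check pasted into the browser developer console. A sketch, assuming your plugin still uses the default name `console-plugin-template` from the template repository; replace the name if you changed it:

[source,javascript]
----
// Returns true when the named plugin appears in the console's
// runtime plugin list (window.SERVER_FLAGS.consolePlugins).
function isPluginLoaded(serverFlags, name) {
  return Array.isArray(serverFlags.consolePlugins) &&
         serverFlags.consolePlugins.includes(name);
}

// In the developer console on http://localhost:9000:
// isPluginLoaded(window.SERVER_FLAGS, 'console-plugin-template')
----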
modules/gitops-default-permissions-of-an-argocd-instance.adoc (10 additions, 6 deletions)

@@ -9,34 +9,38 @@

By default, an Argo CD instance has the following permissions:

* The Argo CD instance has `admin` privileges to manage resources only in the namespace where it is deployed. For instance, an Argo CD instance deployed in the *foo* namespace has `admin` privileges to manage resources only for that namespace.

* Argo CD has the following cluster-scoped permissions because Argo CD requires cluster-wide `read` privileges on resources to function appropriately:
+
[source,yaml,subs="attributes+"]
----
- verbs:
  - get
  - list
  - watch
  apiGroups:
  - '*'
  resources:
  - '*'
- verbs:
  - get
  - list
  nonResourceURLs:
  - '*'
----

[NOTE]
====
* You can edit the cluster roles used by the `argocd-server` and `argocd-application-controller` components where Argo CD is running such that the `write` privileges are limited to only the namespaces and resources that you wish Argo CD to manage.
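You can confirm these defaults on a live cluster by listing the RBAC objects that exist for an instance. A sketch, assuming an Argo CD instance deployed in the *foo* namespace (hypothetical):

[source,terminal]
----
$ oc get roles,rolebindings -n foo
$ oc get clusterroles | grep argocd
----

The namespace-scoped roles carry the `admin`-level permissions, and the cluster roles carry the read-only rules shown above.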
$ oc get -n ${CONTROL_PLANE_NAMESPACE} pods -l app=etcd
----

.... Find a pod that is running and set its name as the value of `ETCD_POD`, for example `ETCD_POD=etcd-0`, and then copy its snapshot database by entering the following command:
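The running pod can also be selected non-interactively instead of being picked by hand. A sketch using a field selector, with the same label and namespace variable as the command above:

[source,terminal]
----
$ ETCD_POD=$(oc get pods -n ${CONTROL_PLANE_NAMESPACE} -l app=etcd \
    --field-selector=status.phase=Running \
    -o jsonpath='{.items[0].metadata.name}')
----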