
Commit 1c2bed1

OSDOCS-170721-batch1

1 parent df8d30e

12 files changed: +160 -82 lines

modules/admin-limit-operations.adoc

Lines changed: 2 additions & 3 deletions

@@ -12,7 +12,7 @@ Shown here is an example procedure to follow for creating a limit range.
 
 .Procedure
 
-. Create the object:
+* Create the object:
 +
 [source,terminal]
 ----
@@ -67,9 +67,8 @@ openshift.io/ImageStream openshift.io/image-tags - 10 -
 == Deleting a limit range
 
 To remove a limit range, run the following command:
-
++
 [source,terminal]
 ----
 $ oc delete limits <limit_name>
 ----
-S
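
As context for the hunks above, the module's create-then-delete flow can be exercised end to end. A minimal sketch, assuming a project named `demoproject` and illustrative limit values (neither is taken from the module):

[source,terminal]
----
# Create a LimitRange that caps per-container CPU and memory (values are illustrative).
$ oc create -n demoproject -f - <<EOF
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
spec:
  limits:
  - type: Container
    max:
      cpu: "2"
      memory: 1Gi
EOF

# Verify the object exists, then remove it as the module describes.
$ oc get limits -n demoproject
$ oc delete limits resource-limits -n demoproject
----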

modules/admin-quota-usage.adoc

Lines changed: 52 additions & 34 deletions

@@ -40,11 +40,11 @@ metadata:
   name: core-object-counts
 spec:
   hard:
-    configmaps: "10" <1>
-    persistentvolumeclaims: "4" <2>
-    replicationcontrollers: "20" <3>
-    secrets: "10" <4>
-    services: "10" <5>
+    configmaps: "10" # <1>
+    persistentvolumeclaims: "4" # <2>
+    replicationcontrollers: "20" # <3>
+    secrets: "10" # <4>
+    services: "10" # <5>
 ----
 <1> The total number of `ConfigMap` objects that can exist in the project.
 <2> The total number of persistent volume claims (PVCs) that can exist in the project.
@@ -63,7 +63,7 @@ metadata:
   name: openshift-object-counts
 spec:
   hard:
-    openshift.io/imagestreams: "10" <1>
+    openshift.io/imagestreams: "10" # <1>
 ----
 <1> The total number of image streams that can exist in the project.
 
@@ -78,13 +78,13 @@ metadata:
   name: compute-resources
 spec:
   hard:
-    pods: "4" <1>
-    requests.cpu: "1" <2>
-    requests.memory: 1Gi <3>
-    requests.ephemeral-storage: 2Gi <4>
-    limits.cpu: "2" <5>
-    limits.memory: 2Gi <6>
-    limits.ephemeral-storage: 4Gi <7>
+    pods: "4" # <1>
+    requests.cpu: "1" # <2>
+    requests.memory: 1Gi # <3>
+    requests.ephemeral-storage: 2Gi # <4>
+    limits.cpu: "2" # <5>
+    limits.memory: 2Gi # <6>
+    limits.ephemeral-storage: 4Gi # <7>
 ----
 <1> The total number of pods in a non-terminal state that can exist in the project.
 <2> Across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core.
@@ -105,9 +105,9 @@ metadata:
   name: besteffort
 spec:
   hard:
-    pods: "1" <1>
+    pods: "1" # <1>
   scopes:
-  - BestEffort <2>
+  - BestEffort # <2>
 ----
 <1> The total number of pods in a non-terminal state with *BestEffort* quality of service that can exist in the project.
 <2> Restricts the quota to only matching pods that have *BestEffort* quality of service for either memory or CPU.
@@ -122,10 +122,10 @@ metadata:
   name: compute-resources-long-running
 spec:
   hard:
-    pods: "4" <1>
-    limits.cpu: "4" <2>
-    limits.memory: "2Gi" <3>
-    limits.ephemeral-storage: "4Gi" <4>
+    pods: "4" # <1>
+    limits.cpu: "4" # <2>
+    limits.memory: "2Gi" # <3>
+    limits.ephemeral-storage: "4Gi" # <4>
   scopes:
   - NotTerminating <5>
 ----
@@ -145,10 +145,10 @@ metadata:
   name: compute-resources-time-bound
 spec:
   hard:
-    pods: "2" <1>
-    limits.cpu: "1" <2>
-    limits.memory: "1Gi" <3>
-    limits.ephemeral-storage: "1Gi" <4>
+    pods: "2" # <1>
+    limits.cpu: "1" # <2>
+    limits.memory: "1Gi" # <3>
+    limits.ephemeral-storage: "1Gi" # <4>
   scopes:
   - Terminating <5>
 ----
@@ -169,13 +169,13 @@ metadata:
   name: storage-consumption
 spec:
   hard:
-    persistentvolumeclaims: "10" <1>
-    requests.storage: "50Gi" <2>
-    gold.storageclass.storage.k8s.io/requests.storage: "10Gi" <3>
-    silver.storageclass.storage.k8s.io/requests.storage: "20Gi" <4>
-    silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" <5>
-    bronze.storageclass.storage.k8s.io/requests.storage: "0" <6>
-    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" <7>
+    persistentvolumeclaims: "10" # <1>
+    requests.storage: "50Gi" # <2>
+    gold.storageclass.storage.k8s.io/requests.storage: "10Gi" # <3>
+    silver.storageclass.storage.k8s.io/requests.storage: "20Gi" # <4>
+    silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" # <5>
+    bronze.storageclass.storage.k8s.io/requests.storage: "0" # <6>
+    bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" # <7>
 ----
 <1> The total number of persistent volume claims in a project.
 <2> Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value.
@@ -221,8 +221,16 @@ $ oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource
 ----
 $ oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4
 resourcequota "test" created
+----
 
+[source,terminal]
+----
 $ oc describe quota test
+----
+
+.Example output
+[source,terminal]
+----
 Name: test
 Namespace: quota
 Resource Used Hard
@@ -243,23 +251,30 @@ You can also use the CLI to view quota details:
 
 . First, get the list of quotas defined in the project. For example, for a project called `demoproject`:
 +
-
 [source,terminal]
 ----
 $ oc get quota -n demoproject
+----
++
+.Example output
+[source,terminal]
+----
 NAME AGE
 besteffort 11m
 compute-resources 2m
 core-object-counts 29m
 ----
 
-
 . Describe the quota you are interested in, for example the `core-object-counts` quota:
 +
-
 [source,terminal]
 ----
 $ oc describe quota core-object-counts -n demoproject
+----
++
+.Example output
+[source,terminal]
+----
 Name: core-object-counts
 Namespace: demoproject
 Resource Used Hard
@@ -299,6 +314,10 @@ After making any changes, restart the controller services to apply them.
 [source,terminal]
 ----
 $ master-restart api
+----
+
+[source,terminal]
+----
 $ master-restart controllers
 ----
 
@@ -337,7 +356,6 @@ admissionConfig:
 <1> The group or resource whose consumption is limited by default.
 <2> The name of the resource tracked by quota associated with the group/resource to limit by default.
 
-
 In the above example, the quota system intercepts every operation that creates or updates a `PersistentVolumeClaim`. It checks what resources controlled by quota would be consumed. If there is no covering quota for those resources in the project, the request is denied. In this example, if a user creates a `PersistentVolumeClaim` that uses storage associated with the gold storage class and there is no matching quota in the project, the request is denied.
 
 endif::[]
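
The recurring change in this file swaps bare `<1>` callouts for `# <1>`, which turns each callout into a YAML comment, so readers can paste the examples unmodified. A minimal sketch of that round trip, assuming a project named `demoproject` (the name is illustrative):

[source,terminal]
----
# The "# <1>" callouts are plain YAML comments, so the pasted object applies cleanly.
$ oc create -n demoproject -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: core-object-counts
spec:
  hard:
    configmaps: "10" # <1>
    secrets: "10" # <2>
EOF

# Inspect usage against the hard limits.
$ oc describe quota core-object-counts -n demoproject
----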

modules/ai-adding-worker-nodes-to-cluster.adoc

Lines changed: 6 additions & 6 deletions

@@ -31,14 +31,14 @@ $ export API_URL=<api_url> <1>
 <1> Replace `<api_url>` with the Assisted Installer API URL, for example, `https://api.openshift.com`
 
 . Import the {sno} cluster by running the following commands:
-
++
 .. Set the `$OPENSHIFT_CLUSTER_ID` variable. Log in to the cluster and run the following command:
 +
 [source,terminal]
 ----
 $ export OPENSHIFT_CLUSTER_ID=$(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')
 ----
-
++
 .. Set the `$CLUSTER_REQUEST` variable that is used to import the cluster:
 +
 [source,terminal]
@@ -51,7 +51,7 @@ $ export CLUSTER_REQUEST=$(jq --null-input --arg openshift_cluster_id "$OPENSHIF
 ----
 <1> Replace `<api_vip>` with the hostname for the cluster's API server. This can be the DNS domain for the API server or the IP address of the single node which the worker node can reach. For example, `api.compute-1.example.com`.
 <2> Replace `<openshift_cluster_name>` with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation.
-
++
 .. Import the cluster and set the `$CLUSTER_ID` variable. Run the following command:
 +
 [source,terminal]
@@ -61,9 +61,9 @@ $ CLUSTER_ID=$(curl "$API_URL/api/assisted-install/v2/clusters/import" -H "Autho
 ----
 
 . Generate the `InfraEnv` resource for the cluster and set the `$INFRA_ENV_ID` variable by running the following commands:
-
++
 .. Download the pull secret file from Red Hat OpenShift Cluster Manager at link:console.redhat.com/openshift/install/pull-secret[console.redhat.com].
-
++
 .. Set the `$INFRA_ENV_REQUEST` variable:
 +
 [source,terminal]
@@ -83,7 +83,7 @@ export INFRA_ENV_REQUEST=$(jq --null-input \
 <2> Replace `<path_to_ssh_pub_key>` with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode.
 <3> Replace `<infraenv_name>` with the plain text name for the `InfraEnv` resource.
 <4> Replace `<iso_image_type>` with the ISO image type, either `full-iso` or `minimal-iso`.
-
++
 .. Post the `$INFRA_ENV_REQUEST` to the link:https://api.openshift.com/?urls.primaryName=assisted-service%20service#/installer/RegisterInfraEnv[/v2/infra-envs] API and set the `$INFRA_ENV_ID` variable:
 +
 [source,terminal]
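
The `$CLUSTER_REQUEST` and `$INFRA_ENV_REQUEST` variables assembled above are JSON bodies built with `jq`. A minimal sketch of that pattern, with an illustrative payload rather than the module's full request:

[source,terminal]
----
# --null-input starts jq with no input; --arg injects shell values as properly escaped JSON strings.
$ export CLUSTER_REQUEST=$(jq --null-input \
  --arg openshift_cluster_id "$OPENSHIFT_CLUSTER_ID" \
  --arg name "<openshift_cluster_name>" \
  '{openshift_cluster_id: $openshift_cluster_id, name: $name}')

$ echo "$CLUSTER_REQUEST"
----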

modules/albo-deleting.adoc

Lines changed: 5 additions & 1 deletion

@@ -22,6 +22,10 @@ $ oc delete subscription aws-load-balancer-operator -n aws-load-balancer-operato
 $ aws iam detach-role-policy \
   --role-name "<cluster-id>-alb-operator" \
   --policy-arn <operator-policy-arn>
+----
++
+[source,terminal]
+----
 $ aws iam delete-role \
   --role-name "<cluster-id>-alb-operator"
 ----
@@ -31,4 +35,4 @@ $ aws iam delete-role \
 [source,terminal]
 ----
 $ aws iam delete-policy --policy-arn <operator-policy-arn>
-----
+----
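
One way to confirm the teardown completed is to query IAM for the deleted objects; after the steps above, both lookups should fail with `NoSuchEntity`. A sketch, not part of the module:

[source,terminal]
----
# Both calls should now return a NoSuchEntity error.
$ aws iam get-role --role-name "<cluster-id>-alb-operator"
$ aws iam get-policy --policy-arn <operator-policy-arn>
----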

modules/albo-installation.adoc

Lines changed: 2 additions & 0 deletions

@@ -70,6 +70,7 @@ $ aws iam create-role --role-name "${CLUSTER_NAME}-alb-operator" \
 Take note of the Operator role ARN in the output. This is referred to as the `$OPERATOR_ROLE_ARN` for the remainder of this process.
 .. Associate the Operator role and policy:
 +
+[source,terminal]
 ----
 $ aws iam attach-role-policy --role-name "${CLUSTER_NAME}-alb-operator" \
   --policy-arn $OPERATOR_POLICY_ARN
@@ -160,6 +161,7 @@ Take note of the Controller role ARN in the output. This is referred to as the `
 
 .. Associate the Controller role and policy:
 +
+[source,terminal]
 ----
 $ aws iam attach-role-policy \
   --role-name "${CLUSTER_NAME}-albo-controller" \

modules/albo-prerequisites.adoc

Lines changed: 29 additions & 1 deletion

@@ -44,10 +44,30 @@ $ oc login --token=<token> --server=<cluster_url>
 [source,terminal]
 ----
 $ export CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.apiServerURL}" | sed 's|^https://||' | awk -F . '{print $2}')
+----
++
+[source,terminal]
+----
 $ export REGION=$(oc get infrastructure cluster -o=jsonpath="{.status.platformStatus.aws.region}")
+----
++
+[source,terminal]
+----
 $ export OIDC_ENDPOINT=$(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
+----
++
+[source,terminal]
+----
 $ export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
+----
++
+[source,terminal]
+----
 $ export SCRATCH="/tmp/${CLUSTER_NAME}/alb-operator"
+----
++
+[source,terminal]
+----
 $ mkdir -p ${SCRATCH}
 ----
 +
@@ -91,7 +111,15 @@ You must tag your AWS VPC resources before you install the AWS Load Balancer Ope
 [source,terminal]
 ----
 $ export VPC_ID=<vpc-id>
+----
++
+[source,terminal]
+----
 $ export PUBLIC_SUBNET_IDS="<public-subnet-a-id> <public-subnet-b-id> <public-subnet-c-id>"
+----
++
+[source,terminal]
+----
 $ export PRIVATE_SUBNET_IDS="<private-subnet-a-id> <private-subnet-b-id> <private-subnet-c-id>"
 ----
 
@@ -127,4 +155,4 @@ EOF
 [source,bash]
 ----
 bash ${SCRATCH}/tag-subnets.sh
-----
+----
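
Because later steps interpolate every one of these variables, it is worth confirming they all resolved before continuing. A sketch, not part of the module:

[source,terminal]
----
# Every value should be non-empty before proceeding.
$ echo "cluster=${CLUSTER_NAME} region=${REGION} oidc=${OIDC_ENDPOINT} account=${AWS_ACCOUNT_ID} scratch=${SCRATCH}"
----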

modules/albo-validate-install.adoc

Lines changed: 9 additions & 5 deletions

@@ -82,6 +82,10 @@ ALB provisioning takes a few minutes. If you receive an error that says `curl: (
 ----
 $ ALB_INGRESS=$(oc -n hello-world get ingress hello-openshift-alb \
   -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
+----
++
+[source,terminal]
+----
 $ curl "http://${ALB_INGRESS}"
 ----
 +
@@ -127,18 +131,18 @@ NLB provisioning takes a few minutes. If you receive an error that says `curl: (
 ----
 $ NLB=$(oc -n hello-world get service hello-openshift-nlb \
   -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
-$ curl "http://${NLB}"
 ----
 +
-.Example output
-[source,text]
+[source,terminal]
 ----
-Hello OpenShift!
+$ curl "http://${NLB}"
 ----
++
+Expected output shows `Hello OpenShift!`.
 
 . You can now delete the sample application and all resources in the `hello-world` namespace.
 +
 [source,terminal]
 ----
 $ oc delete project hello-world
-----
+----
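
Since provisioning takes a few minutes and `curl` fails with exit code 6 until the load balancer's DNS name resolves, a retry loop saves manual retries. A sketch, assuming the `$ALB_INGRESS` variable set above:

[source,terminal]
----
# Poll until the ALB answers; --fail also makes curl return non-zero on HTTP errors.
$ until curl --silent --fail "http://${ALB_INGRESS}"; do echo "waiting for ALB..."; sleep 10; done
----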

modules/cluster-logging-collector-log-forward-cloudwatch.adoc

Lines changed: 9 additions & 0 deletions

@@ -106,7 +106,16 @@ To generate log data for this example, you run a `busybox` pod in a namespace ca
 [source,terminal]
 ----
 $ oc run busybox --image=busybox -- sh -c 'while true; do echo "My life is my message"; sleep 3; done'
+----
+
+[source,terminal]
+----
 $ oc logs -f busybox
+----
+
+.Example output
+[source,terminal]
+----
 My life is my message
 My life is my message
 My life is my message
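
Once the `busybox` pod is emitting log lines, the forwarded entries should become visible in CloudWatch. A sketch of one way to check; the log group prefix is an assumption, since the real names depend on the `ClusterLogForwarder` configuration:

[source,terminal]
----
# List candidate log groups; replace <log_group_prefix> with your forwarder's group name prefix.
$ aws logs describe-log-groups --log-group-name-prefix <log_group_prefix>
----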
