
Commit ddb3a6d

Merge pull request #97879 from brendan-daly-red-hat/OSDOCS-15411_a
OSDOCS-15411_a#CQA updates
2 parents 943e558 + 0c78a39 commit ddb3a6d

14 files changed: +88, -105 lines changed

modules/nodes-pods-autoscaling-about.adoc

Lines changed: 12 additions & 29 deletions
@@ -6,30 +6,16 @@
 [id="nodes-pods-autoscaling-about_{context}"]
 = Understanding horizontal pod autoscalers

-You can create a horizontal pod autoscaler to specify the minimum and maximum number of pods
-you want to run, as well as the CPU utilization or memory utilization your pods should target.
-
-After you create a horizontal pod autoscaler, {product-title} begins to query the CPU and/or memory resource metrics on the pods.
-When these metrics are available, the horizontal pod autoscaler computes
-the ratio of the current metric utilization with the desired metric utilization,
-and scales up or down accordingly. The query and scaling occurs at a regular interval,
-but can take one to two minutes before metrics become available.
-
-For replication controllers, this scaling corresponds directly to the replicas
-of the replication controller. For deployment configurations, scaling corresponds
-directly to the replica count of the deployment configuration. Note that autoscaling
-applies only to the latest deployment in the `Complete` phase.
-
-{product-title} automatically accounts for resources and prevents unnecessary autoscaling
-during resource spikes, such as during start up. Pods in the `unready` state
-have `0 CPU` usage when scaling up and the autoscaler ignores the pods when scaling down.
-Pods without known metrics have `0% CPU` usage when scaling up and `100% CPU` when scaling down.
-This allows for more stability during the HPA decision. To use this feature, you must configure
-readiness checks to determine if a new pod is ready for use.
+You can create a horizontal pod autoscaler to specify the minimum and maximum number of pods you want to run, and the CPU usage or memory usage your pods should target.
+
+After you create a horizontal pod autoscaler, {product-title} begins to query the CPU metrics, the memory metrics, or both on the pods. When these metrics are available, the horizontal pod autoscaler computes the ratio of the current metric use with the intended metric use, and scales up or down as needed. The query and scaling occur at a regular interval, but it can take one to two minutes before metrics become available.
+
+For replication controllers, this scaling corresponds directly to the replicas of the replication controller. For deployments, scaling corresponds directly to the replica count of the deployment. Note that autoscaling applies only to the latest deployment in the `Complete` phase.
+
+{product-title} automatically accounts for resources and prevents unnecessary autoscaling during resource spikes, such as during start up. Pods in the `unready` state have `0 CPU` usage when scaling up and the autoscaler ignores the pods when scaling down. Pods without known metrics have `0% CPU` usage when scaling up and `100% CPU` when scaling down. This allows for more stability during the HPA decision. To use this feature, you must configure readiness checks to determine if a new pod is ready for use.

 ifdef::openshift-origin,openshift-enterprise,openshift-webscale[]
-To use horizontal pod autoscalers, your cluster administrator must have
-properly configured cluster metrics.
+To use horizontal pod autoscalers, your cluster administrator must have properly configured cluster metrics.
 endif::openshift-origin,openshift-enterprise,openshift-webscale[]

 == Supported metrics
@@ -43,27 +29,24 @@ The following metrics are supported by horizontal pod autoscalers:
 |Metric |Description |API version

 |CPU utilization
-|Number of CPU cores used. Can be used to calculate a percentage of the pod's requested CPU.
+|Number of CPU cores used. You can use this to calculate a percentage of the pod's requested CPU.
 |`autoscaling/v1`, `autoscaling/v2`

 |Memory utilization
-|Amount of memory used. Can be used to calculate a percentage of the pod's requested memory.
+|Amount of memory used. You can use this to calculate a percentage of the pod's requested memory.
 |`autoscaling/v2`
 |===

 [IMPORTANT]
 ====
-For memory-based autoscaling, memory usage must increase and decrease
-proportionally to the replica count. On average:
+For memory-based autoscaling, memory usage must increase and decrease proportionally to the replica count. On average:

 * An increase in replica count must lead to an overall decrease in memory
 (working set) usage per-pod.
 * A decrease in replica count must lead to an overall increase in per-pod memory
 usage.

-Use the {product-title} web console to check the memory behavior of your application
-and ensure that your application meets these requirements before using
-memory-based autoscaling.
+Use the {product-title} web console to check the memory behavior of your application and ensure that your application meets these requirements before using memory-based autoscaling.
 ====

 The following example shows autoscaling for the `hello-node` `Deployment` object. The initial deployment requires 3 pods. The HPA object increases the minimum to 5. If CPU usage on the pods reaches 75%, the pods increase to 7:
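
The sample object itself falls outside the lines shown in this hunk. For orientation, a minimal sketch of an HPA matching that description might look like the following; the `hello-node` target and the numeric values come from the sentence above, while the object name and API group are assumptions, not the module's actual sample:

[source,yaml]
----
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-node-hpa        # hypothetical name; the module's sample is not shown here
spec:
  scaleTargetRef:             # the workload object to scale
    apiVersion: apps/v1
    kind: Deployment
    name: hello-node
  minReplicas: 5              # raises the minimum from the initial 3 pods
  maxReplicas: 7              # upper bound reached when CPU usage climbs
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75   # scale out when average CPU usage reaches 75%
----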

modules/nodes-pods-autoscaling-best-practices-hpa.adoc

Lines changed: 7 additions & 8 deletions
@@ -6,14 +6,13 @@
 [id="nodes-pods-autoscaling-best-practices-hpa_{context}"]
 = Best practices

-.All pods must have resource requests configured
-The HPA makes a scaling decision based on the observed CPU or memory utilization values of pods in an {product-title} cluster. Utilization values are calculated as a percentage of the resource requests of each pod.
-Missing resource request values can affect the optimal performance of the HPA.
+For optimal performance, configure resource requests for all pods. To prevent frequent replica fluctuations, configure the cool down period.

-.Configure the cool down period
-During horizontal pod autoscaling, there might be a rapid scaling of events without a time gap. Configure the cool down period to prevent frequent replica fluctuations.
-You can specify a cool down period by configuring the `stabilizationWindowSeconds` field. The stabilization window is used to restrict the fluctuation of replicas count when the metrics used for scaling keep fluctuating.
-The autoscaling algorithm uses this window to infer a previous desired state and avoid unwanted changes to workload scale.
+All pods must have resource requests configured::
+The HPA makes a scaling decision based on the observed CPU or memory usage values of pods in an {product-title} cluster. Utilization values are calculated as a percentage of the resource requests of each pod. Missing resource request values can affect the optimal performance of the HPA.
+
+Configure the cool down period::
+During horizontal pod autoscaling, there might be rapid scaling events without a time gap. Configure the cool down period to prevent frequent replica fluctuations. You can specify a cool down period by configuring the `stabilizationWindowSeconds` field. The stabilization window is used to restrict the fluctuation of the replica count when the metrics used for scaling keep fluctuating. The autoscaling algorithm uses this window to infer a previous required state and avoid unwanted changes to workload scale.

 For example, a stabilization window is specified for the `scaleDown` field:

@@ -24,4 +23,4 @@ behavior:
     stabilizationWindowSeconds: 300
 ----

-In the above example, all desired states for the past 5 minutes are considered. This approximates a rolling maximum, and avoids having the scaling algorithm frequently remove pods only to trigger recreating an equivalent pod just moments later.
+In the previous example, all intended states for the past 5 minutes are considered. This approximates a rolling maximum, and avoids having the scaling algorithm often remove pods only to trigger recreating an equivalent pod just moments later.
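
For context, here is a sketch of how that `scaleDown` stanza sits inside a complete HPA spec under the `autoscaling/v2` API; the object name, target workload, and replica bounds are illustrative assumptions, not content from this module:

[source,yaml]
----
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa           # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app         # hypothetical target workload
  minReplicas: 2              # assumed bounds for illustration
  maxReplicas: 10
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # consider intended states from the past 5 minutes
----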

modules/nodes-pods-autoscaling-policies.adoc

Lines changed: 5 additions & 4 deletions
@@ -2,10 +2,11 @@
 //
 // * nodes/nodes-pods-autoscaling.adoc

+:_mod-docs-content-type: CONCEPT
 [id="nodes-pods-autoscaling-policies_{context}"]
 = Scaling policies

-The `autoscaling/v2` API allows you to add _scaling policies_ to a horizontal pod autoscaler. A scaling policy controls how the {product-title} horizontal pod autoscaler (HPA) scales pods. Scaling policies allow you to restrict the rate that HPAs scale pods up or down by setting a specific number or specific percentage to scale in a specified period of time. You can also define a _stabilization window_, which uses previously computed desired states to control scaling if the metrics are fluctuating. You can create multiple policies for the same scaling direction, and determine which policy is used, based on the amount of change. You can also restrict the scaling by timed iterations. The HPA scales pods during an iteration, then performs scaling, as needed, in further iterations.
+Use the `autoscaling/v2` API to add _scaling policies_ to a horizontal pod autoscaler. A scaling policy controls how the {product-title} horizontal pod autoscaler (HPA) scales pods. Use scaling policies to restrict the rate that HPAs scale pods up or down by setting a specific number or specific percentage to scale in a specified period of time. You can also define a _stabilization window_, which uses previously computed required states to control scaling if the metrics are fluctuating. You can create multiple policies for the same scaling direction, and determine the policy to use, based on the amount of change. You can also restrict the scaling by timed iterations. The HPA scales pods during an iteration, then performs scaling, as needed, in further iterations.

 .Sample HPA object with a scaling policy
 [source, yaml]
@@ -45,8 +46,8 @@ spec:
 <4> Limits the amount of scaling, either the number of pods or percentage of pods, during each iteration. There is no default value for scaling down by number of pods.
 <5> Determines the length of a scaling iteration. The default value is `15` seconds.
 <6> The default value for scaling down by percentage is 100%.
-<7> Determines which policy to use first, if multiple policies are defined. Specify `Max` to use the policy that allows the highest amount of change, `Min` to use the policy that allows the lowest amount of change, or `Disabled` to prevent the HPA from scaling in that policy direction. The default value is `Max`.
-<8> Determines the time period the HPA should look back at desired states. The default value is `0`.
+<7> Determines the policy to use first, if multiple policies are defined. Specify `Max` to use the policy that allows the highest amount of change, `Min` to use the policy that allows the lowest amount of change, or `Disabled` to prevent the HPA from scaling in that policy direction. The default value is `Max`.
+<8> Determines the time period over which the HPA reviews the required states. The default value is `0`.
 <9> This example creates a policy for scaling up.
 <10> Limits the amount of scaling up by the number of pods. The default value for scaling up the number of pods is 4.
 <11> Limits the amount of scaling up by the percentage of pods. The default value for scaling up by percentage is 100%.
@@ -80,7 +81,7 @@ spec:

 In this example, when the number of pods is greater than 40, the percent-based policy is used for scaling down, as that policy results in a larger change, as required by the `selectPolicy`.

-If there are 80 pod replicas, in the first iteration the HPA reduces the pods by 8, which is 10% of the 80 pods (based on the `type: Percent` and `value: 10` parameters), over one minute (`periodSeconds: 60`). For the next iteration, the number of pods is 72. The HPA calculates that 10% of the remaining pods is 7.2, which it rounds up to 8 and scales down 8 pods. On each subsequent iteration, the number of pods to be scaled is re-calculated based on the number of remaining pods. When the number of pods falls below 40, the pods-based policy is applied, because the pod-based number is greater than the percent-based number. The HPA reduces 4 pods at a time (`type: Pods` and `value: 4`), over 30 seconds (`periodSeconds: 30`), until there are 20 replicas remaining (`minReplicas`).
+If there are 80 pod replicas, in the first iteration the HPA reduces the pods by 8, which is 10% of the 80 pods (based on the `type: Percent` and `value: 10` parameters), over one minute (`periodSeconds: 60`). For the next iteration, the number of pods is 72. The HPA calculates that 10% of the remaining pods is 7.2, which it rounds up to 8 and scales down 8 pods. On each subsequent iteration, the number of pods to be scaled is re-calculated based on the number of remaining pods. When the number of pods falls to less than 40, the pods-based policy is applied, because the pod-based number is greater than the percent-based number. The HPA reduces 4 pods at a time (`type: Pods` and `value: 4`), over 30 seconds (`periodSeconds: 30`), until there are 20 replicas remaining (`minReplicas`).

 The `selectPolicy: Disabled` parameter prevents the HPA from scaling up the pods. You can manually scale up by adjusting the number of replicas in the replica set or deployment set, if needed.
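
Piecing together the parameters quoted in that walkthrough, the scale-down portion of the sample object would look roughly like this sketch; only the fields named in the text are taken from it, and the stanza placement follows the `autoscaling/v2` schema:

[source,yaml]
----
spec:
  minReplicas: 20             # scaling down stops at 20 replicas
  behavior:
    scaleDown:
      policies:
      - type: Percent
        value: 10             # remove at most 10% of the current pods ...
        periodSeconds: 60     # ... per one-minute iteration
      - type: Pods
        value: 4              # remove at most 4 pods ...
        periodSeconds: 30     # ... per 30-second iteration
      selectPolicy: Max       # use whichever policy allows the larger change
----

With 80 replicas, `Max` selects the percent-based policy, because 10% of 80 is 8 pods and 8 is greater than 4; once the count falls to less than 40, the pods-based value of 4 is the larger change, so that policy takes over.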
8687

modules/nodes-pods-vertical-autoscaler-about.adoc

Lines changed: 10 additions & 10 deletions
@@ -6,35 +6,35 @@
 [id="nodes-pods-vertical-autoscaler-about_{context}"]
 = About the Vertical Pod Autoscaler Operator

-The Vertical Pod Autoscaler Operator (VPA) is implemented as an API resource and a custom resource (CR). The CR determines the actions that the VPA Operator should take with the pods associated with a specific workload object, such as a daemon set, replication controller, and so forth, in a project.
+The Vertical Pod Autoscaler Operator (VPA) is implemented as an API resource and a custom resource (CR). The CR determines the actions for the VPA to take with the pods associated with a specific workload object, such as a daemon set, replication controller, and so forth, in a project.

-The VPA Operator consists of three components, each of which has its own pod in the VPA namespace:
+The VPA consists of three components, each of which has its own pod in the VPA namespace:

 Recommender::
-The VPA recommender monitors the current and past resource consumption and, based on this data, determines the optimal CPU and memory resources for the pods in the associated workload object.
+The VPA recommender monitors the current and past resource consumption. Based on this data, the VPA recommender determines the optimal CPU and memory resources for the pods in the associated workload object.

 Updater::
-The VPA updater checks if the pods in the associated workload object have the correct resources. If the resources are correct, the updater takes no action. If the resources are not correct, the updater kills the pod so that they can be recreated by their controllers with the updated requests.
+The VPA updater checks if the pods in the associated workload object have the correct resources. If the resources are correct, the updater takes no action. If the resources are not correct, the updater kills the pod so that the pods' controllers can re-create them with the updated requests.

 Admission controller::
-The VPA admission controller sets the correct resource requests on each new pod in the associated workload object, whether the pod is new or was recreated by its controller due to the VPA updater actions.
+The VPA admission controller sets the correct resource requests on each new pod in the associated workload object. This applies whether the pod is new or the controller re-created the pod due to the VPA updater actions.

 You can use the default recommender or use your own alternative recommender to autoscale based on your own algorithms.

-The default recommender automatically computes historic and current CPU and memory usage for the containers in those pods and uses this data to determine optimized resource limits and requests to ensure that these pods are operating efficiently at all times. For example, the default recommender suggests reduced resources for pods that are requesting more resources than they are using and increased resources for pods that are not requesting enough.
+The default recommender automatically computes historic and current CPU and memory usage for the containers in those pods. The default recommender uses this data to determine optimized resource limits and requests to ensure that these pods are operating efficiently at all times. For example, the default recommender suggests reduced resources for pods that are requesting more resources than they are using and increased resources for pods that are not requesting enough.

-The VPA then automatically deletes any pods that are out of alignment with these recommendations one at a time, so that your applications can continue to serve requests with no downtime. The workload objects then redeploy the pods with the original resource limits and requests. The VPA uses a mutating admission webhook to update the pods with optimized resource limits and requests before the pods are admitted to a node. If you do not want the VPA to delete pods, you can view the VPA resource limits and requests and manually update the pods as needed.
+The VPA then automatically deletes any pods that are out of alignment with these recommendations one at a time, so that your applications can continue to serve requests with no downtime. The workload objects then redeploy the pods with the original resource limits and requests. The VPA uses a mutating admission webhook to update the pods with optimized resource limits and requests before admitting the pods to a node. If you do not want the VPA to delete pods, you can view the VPA resource limits and requests and manually update the pods as needed.

 [NOTE]
 ====
-By default, workload objects must specify a minimum of two replicas in order for the VPA to automatically delete their pods. Workload objects that specify fewer replicas than this minimum are not deleted. If you manually delete these pods, when the workload object redeploys the pods, the VPA does update the new pods with its recommendations. You can change this minimum by modifying the `VerticalPodAutoscalerController` object as shown in _Changing the VPA minimum value_.
+By default, workload objects must specify a minimum of two replicas for the VPA to automatically delete their pods. Workload objects that specify fewer replicas than this minimum are not deleted. If you manually delete these pods, when the workload object redeploys the pods, the VPA updates the new pods with its recommendations. You can change this minimum by modifying the `VerticalPodAutoscalerController` object as shown in _Changing the VPA minimum value_.
 ====

 For example, if you have a pod that uses 50% of the CPU but only requests 10%, the VPA determines that the pod is consuming more CPU than requested and deletes the pod. The workload object, such as replica set, restarts the pods and the VPA updates the new pod with its recommended resources.

-For developers, you can use the VPA to help ensure your pods stay up during periods of high demand by scheduling pods onto nodes that have appropriate resources for each pod.
+For developers, you can use the VPA to help ensure that your pods stay active during periods of high demand by scheduling pods onto nodes that have appropriate resources for each pod.

-Administrators can use the VPA to better utilize cluster resources, such as preventing pods from reserving more CPU resources than needed. The VPA monitors the resources that workloads are actually using and adjusts the resource requirements so capacity is available to other workloads. The VPA also maintains the ratios between limits and requests that are specified in initial container configuration.
+Administrators can use the VPA to better use cluster resources, such as preventing pods from reserving more CPU resources than needed. The VPA monitors the resources that workloads are actually using and adjusts the resource requirements so capacity is available to other workloads. The VPA also maintains the ratios between limits and requests specified in the initial container configuration.

 [NOTE]
 ====
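
For orientation beyond the lines shown in this diff, a minimal `VerticalPodAutoscaler` CR targeting a deployment might look like the following sketch; the object and workload names and the update mode are illustrative assumptions, not content from this module:

[source,yaml]
----
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: example-vpa              # hypothetical name
spec:
  targetRef:                     # the workload object whose pods the VPA manages
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment    # hypothetical workload
  updatePolicy:
    updateMode: "Auto"           # let the updater delete pods so they restart with recommended requests
----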
