
Commit 9416b6f

Merge pull request #101033 from skopacz1/OSDOCS-16572_3

OSDOCS-16572: third batch of rec visibility changes

2 parents: b778196 + 68559c8

5 files changed: 24 additions, 5 deletions

modules/images-create-guide-openshift.adoc

Lines changed: 3 additions & 0 deletions

@@ -73,7 +73,10 @@ For images that are intended to run application code provided by a third party,
 
 Users of your image are able to configure it without having to create a downstream image based on your image. This means that the runtime configuration is handled using environment variables. For a simple configuration, the running process can consume the environment variables directly. For a more complicated configuration or for runtimes which do not support this, configure the runtime by defining a template configuration file that is processed during startup. During this processing, values supplied using environment variables can be substituted into the configuration file or used to make decisions about what options to set in the configuration file.
 
+[NOTE]
+====
 It is also possible and recommended to pass secrets such as certificates and keys into the container using environment variables. This ensures that the secret values do not end up committed in an image and leaked into a container image registry.
+====
 
 Providing environment variables allows consumers of your image to customize behavior, such as database settings, passwords, and performance tuning, without having to introduce a new layer on top of your image. Instead, they can simply define environment variable values when defining a pod and change those settings without rebuilding the image.
 
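The pattern described in this module can be sketched with a pod definition that injects both plain configuration and secret values as environment variables. All names here (image reference, variable names, Secret name) are illustrative, not part of the original docs:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app   # hypothetical pod name
spec:
  containers:
  - name: app
    image: quay.io/example/app:latest   # illustrative image reference
    env:
    # Plain configuration value consumed directly by the running process
    - name: DB_HOST
      value: postgresql.example.svc
    # Secret value injected at runtime, so it is never baked into the
    # image layers or pushed to a container image registry
    - name: TLS_KEY
      valueFrom:
        secretKeyRef:
          name: example-tls   # hypothetical Secret in the same namespace
          key: tls.key
```

Consumers can change `DB_HOST` or point `secretKeyRef` at their own Secret when defining the pod, without rebuilding the image.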

modules/monitoring-configuring-persistent-storage.adoc

Lines changed: 5 additions & 2 deletions

@@ -11,13 +11,16 @@ Run cluster monitoring with persistent storage to gain the following benefits:
 * Protect your metrics and alerting data from data loss by storing them in a persistent volume (PV). As a result, they can survive pods being restarted or recreated.
 * Avoid getting duplicate notifications and losing silences for alerts when the Alertmanager pods are restarted.
 
-For production environments, it is highly recommended to configure persistent storage.
-
 [IMPORTANT]
 ====
 In multi-node clusters, you must configure persistent storage for Prometheus, Alertmanager, and Thanos Ruler to ensure high availability.
 ====
 
+[NOTE]
+====
+For production environments, it is highly recommended to configure persistent storage.
+====
+
 [id="persistent-storage-prerequisites_{context}"]
 == Persistent storage prerequisites
 
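As a minimal sketch of what configuring persistent storage for cluster monitoring looks like, a `cluster-monitoring-config` ConfigMap can define volume claim templates for the monitoring components. The storage class name and sizes below are placeholders, not values from this commit:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Persistent volume claim for Prometheus, so metrics survive pod restarts
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          storageClassName: fast-ssd   # placeholder storage class
          resources:
            requests:
              storage: 40Gi
    # Persistent volume claim for Alertmanager, preserving silences and
    # notification state across restarts
    alertmanagerMain:
      volumeClaimTemplate:
        spec:
          storageClassName: fast-ssd
          resources:
            requests:
              storage: 10Gi
```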

modules/network-observability-dependency-network-observability-operator.adoc

Lines changed: 6 additions & 1 deletion

@@ -10,4 +10,9 @@ You can optionally integrate the Network Observability Operator with other compo
 
 {loki-op}:: You can use Loki as the backend to store all collected flows with a maximal level of details. It is recommended to use the Red Hat supported {loki-op} to install Loki. You can also choose to use network observability without Loki, but you need to consider some factors. For more information, see "Network observability without Loki".
 
-AMQ Streams Operator:: Kafka provides scalability, resiliency and high availability in the {product-title} cluster for large scale deployments. If you choose to use Kafka, it is recommended to use Red Hat supported AMQ Streams Operator.
+AMQ Streams Operator:: Kafka provides scalability, resiliency and high availability in the {product-title} cluster for large scale deployments.
++
+[NOTE]
+====
+If you choose to use Kafka, it is recommended to use Red Hat supported AMQ Streams Operator.
+====
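For orientation, routing network observability flows through Kafka is configured on the `FlowCollector` resource. The sketch below is an assumption-heavy illustration: exact field names and accepted values (for example, the casing of `deploymentModel`) vary across Network Observability Operator versions, and the bootstrap address and topic are hypothetical:

```yaml
apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  deploymentModel: Kafka   # send flows through Kafka instead of directly to the processor
  kafka:
    # Hypothetical bootstrap service exposed by an AMQ Streams Kafka cluster
    address: kafka-cluster-kafka-bootstrap.netobserv
    topic: network-flows
```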

modules/nodes-cluster-overcommit-node-resources.adoc

Lines changed: 5 additions & 1 deletion

@@ -11,7 +11,11 @@
 To provide more reliable scheduling and minimize node resource overcommitment,
 each node can reserve a portion of its resources for use by system daemons
 that are required to run on your node for your cluster to function.
-In particular, it is recommended that you reserve resources for incompressible resources such as memory.
+
+[NOTE]
+====
+It is recommended that you reserve resources for incompressible resources such as memory.
+====
 
 .Procedure
 
modules/nodes-nodes-resources-configuring-about.adoc

Lines changed: 5 additions & 1 deletion

@@ -55,7 +55,11 @@ The node enforces resource constraints by using a new cgroup hierarchy that enfo
 
 Administrators should treat system daemons similar to pods that have a guaranteed quality of service. System daemons can burst within their bounding control groups and this behavior must be managed as part of cluster deployments. Reserve CPU and memory resources for system daemons by specifying the amount of CPU and memory resources in `system-reserved`.
 
-Enforcing `system-reserved` limits can prevent critical system services from receiving CPU and memory resources. As a result, a critical system service can be ended by the out-of-memory killer. The recommendation is to enforce `system-reserved` only if you have profiled the nodes exhaustively to determine precise estimates and you are confident that critical system services can recover if any process in that group is ended by the out-of-memory killer.
+[NOTE]
+====
+Enforcing `system-reserved` limits can prevent critical system services from receiving CPU and memory resources. As a result, a critical system service can be ended by the out-of-memory killer.
+The recommendation is to enforce `system-reserved` only if you have profiled the nodes exhaustively to determine precise estimates and you are confident that critical system services can recover if any process in that group is ended by the out-of-memory killer.
+====
 
 [id="allocate-eviction-thresholds_{context}"]
 == Understanding Eviction Thresholds
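The distinction this module draws between reserving and enforcing `system-reserved` maps to separate kubelet settings. The fragment below is a speculative sketch of the enforcement side (field availability in a given `KubeletConfig` CR and the cgroup path are assumptions), shown only to illustrate why exhaustive profiling is advised first:

```yaml
  kubeletConfig:
    systemReserved:
      cpu: 500m
      memory: 1Gi
    # Assumed cgroup that the node's system daemons run under
    systemReservedCgroup: /system.slice
    # Adding "system-reserved" turns the reservation into a hard limit on
    # that cgroup; a daemon exceeding it can be ended by the OOM killer,
    # which is why enforcement is recommended only after profiling
    enforceNodeAllocatable:
    - pods
    - system-reserved
```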
