
Commit 63c176f (parents: 72b8c9a, c9743a9)

Merge pull request #99452 from theashiot/OBSDOCS-2437-6.0

OBSDOCS-2305: PT1: Port the Log collection and forwarding chapter to 6.0

4 files changed: +86 -122 lines

configuring/configuring-log-forwarding.adoc

Lines changed: 6 additions & 0 deletions

@@ -106,6 +106,11 @@ The order of filterRefs matters, as they are applied sequentially. Earlier filte
 
 Filters are configured in an array under `spec.filters`. They can match incoming log messages based on the value of structured fields and modify or drop them.
 
+include::modules/cluster-logging-collector-log-forwarding-about.adoc[leveloffset=+1]
+
+include::modules/logging-create-clf.adoc[leveloffset=+1]
+
+include::modules/logging-delivery-tuning.adoc[leveloffset=+1]
 
 include::modules/enabling-multi-line-exception-detection.adoc[leveloffset=+2]
 
@@ -127,3 +132,4 @@ include::modules/input-spec-filter-labels-expressions.adoc[leveloffset=+2]
 include::modules/logging-content-filter-prune-records.adoc[leveloffset=+2]
 include::modules/input-spec-filter-audit-infrastructure.adoc[leveloffset=+1]
 include::modules/input-spec-filter-namespace-container.adoc[leveloffset=+1]
+
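The filters mentioned in the context line above are configured under `spec.filters` and referenced from pipelines by name. As a minimal sketch of one such filter, assuming the `drop` filter type from the observability.openshift.io/v1 API; the `drop-debug-logs` name, the `.level` test, and the `my-output` output are hypothetical, not part of this commit:

[source,yaml]
----
spec:
  filters:
  - name: drop-debug-logs
    type: drop # drop records that match the test below
    drop:
    - test:
      - field: .level
        matches: debug
  pipelines:
  - name: app-logs
    inputRefs:
    - application
    outputRefs:
    - my-output
    filterRefs: # applied in order; a dropped record skips later filters
    - drop-debug-logs
----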

modules/cluster-logging-collector-log-forwarding-about.adoc

Lines changed: 44 additions & 88 deletions
@@ -1,8 +1,12 @@
+// Module included in the following assemblies:
+//
+// * configuring/configuring-log-forwarding.adoc
+
 :_mod-docs-content-type: CONCEPT
 [id="cluster-logging-collector-log-forwarding-about_{context}"]
 = About forwarding logs to third-party systems
 
-To send logs to specific endpoints inside and outside your {ocp-product-title} cluster, you specify a combination of _outputs_ and _pipelines_ in a `ClusterLogForwarder` custom resource (CR). You can also use _inputs_ to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes _Secret_ object.
+To send logs to specific endpoints inside and outside your {ocp-product-title} cluster, you specify a combination of outputs and pipelines in a `ClusterLogForwarder` custom resource (CR). You can also use inputs to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes `Secret` object.
 
 _pipeline_:: Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following:
 +
@@ -31,103 +35,55 @@ Note the following:
 
 * You can use multiple types of outputs in the `ClusterLogForwarder` custom resource (CR) to send logs to servers that support different protocols.
-The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the `my-apps-logs` project to the internal Elasticsearch instance.
+The following example forwards application and infrastructure logs to a secure external Elasticsearch instance.
 
 .Sample log forwarding outputs and pipelines
 [source,yaml]
 ----
-apiVersion: "logging.openshift.io/v1"
 kind: ClusterLogForwarder
+apiVersion: observability.openshift.io/v1
 metadata:
-  name: <log_forwarder_name> <1>
-  namespace: <log_forwarder_namespace> <2>
+  name: instance
+  namespace: openshift-logging
 spec:
-  serviceAccountName: <service_account_name> <3>
+  serviceAccount:
+    name: logging-admin
   outputs:
-  - name: elasticsearch-secure <4>
-    type: "elasticsearch"
-    url: https://elasticsearch.secure.com:9200
-    secret:
-      name: elasticsearch
-  - name: elasticsearch-insecure <5>
-    type: "elasticsearch"
-    url: http://elasticsearch.insecure.com:9200
-  - name: kafka-app <6>
-    type: "kafka"
-    url: tls://kafka.secure.com:9093/app-topic
-  inputs: <7>
-  - name: my-app-logs
-    application:
-      namespaces:
-      - my-project
+  - name: external-es
+    type: elasticsearch
+    elasticsearch:
+      url: 'https://example-elasticsearch-secure.com:9200'
+      version: 8 # <1>
+      index: '{.log_type||"undefined"}' # <2>
+      authentication:
+        username:
+          key: username
+          secretName: es-secret # <3>
+        password:
+          key: password
+          secretName: es-secret # <3>
+    tls:
+      ca: # <4>
+        key: ca-bundle.crt
+        secretName: es-secret
+      certificate:
+        key: tls.crt
+        secretName: es-secret
+      key:
+        key: tls.key
+        secretName: es-secret
   pipelines:
-  - name: audit-logs <8>
-    inputRefs:
-    - audit
-    outputRefs:
-    - elasticsearch-secure
-    - default
-    labels:
-      secure: "true" <9>
-      datacenter: "east"
-  - name: infrastructure-logs <10>
-    inputRefs:
-    - infrastructure
-    outputRefs:
-    - elasticsearch-insecure
-    labels:
-      datacenter: "west"
-  - name: my-app <11>
-    inputRefs:
-    - my-app-logs
-    outputRefs:
-    - default
-  - inputRefs: <12>
-    - application
-    outputRefs:
-    - kafka-app
-    labels:
-      datacenter: "south"
+  - name: my-logs
+    inputRefs:
+    - application
+    - infrastructure
+    outputRefs:
+    - external-es
 ----
-<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name.
-<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace.
-<3> The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the `openshift-logging` namespace.
-<4> Configuration for an secure Elasticsearch output using a secret with a secure URL.
-** A name to describe the output.
-** The type of output: `elasticsearch`.
-** The secure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.
-** The secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project.
-<5> Configuration for an insecure Elasticsearch output:
-** A name to describe the output.
-** The type of output: `elasticsearch`.
-** The insecure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix.
-<6> Configuration for a Kafka output using a client-authenticated TLS communication over a secure URL:
-** A name to describe the output.
-** The type of output: `kafka`.
-** Specify the URL and port of the Kafka broker as a valid absolute URL, including the prefix.
-<7> Configuration for an input to filter application logs from the `my-project` namespace.
-<8> Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance:
-** A name to describe the pipeline.
-** The `inputRefs` is the log type, in this example `audit`.
-** The `outputRefs` is the name of the output to use, in this example `elasticsearch-secure` to forward to the secure Elasticsearch instance and `default` to forward to the internal Elasticsearch instance.
-** Optional: Labels to add to the logs.
-<9> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean.
-<10> Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance.
-<11> Configuration for a pipeline to send logs from the `my-project` project to the internal Elasticsearch instance.
-** A name to describe the pipeline.
-** The `inputRefs` is a specific input: `my-app-logs`.
-** The `outputRefs` is `default`.
-** Optional: String. One or more labels to add to the logs.
-<12> Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name:
-** The `inputRefs` is the log type, in this example `application`.
-** The `outputRefs` is the name of the output to use.
-** Optional: String. One or more labels to add to the logs.
-
-[discrete]
-[id="cluster-logging-external-fluentd_{context}"]
-== Fluentd log handling when the external log aggregator is unavailable
-
-If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. {ocp-product-title} rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods.
+<1> Forwarding to an external Elasticsearch instance of version 8.x or later requires the `version` field.
+<2> The `index` template reads the value of the `.log_type` field and falls back to `"undefined"` if that field is not found.
+<3> Uses a username and password to authenticate to the server.
+<4> Enables mutual Transport Layer Security (mTLS) between the collector and Elasticsearch. The `tls` spec identifies the secret and the keys that hold the respective certificates.
 
 [discrete]
 == Supported Authorization Keys
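The sample output above authenticates with a username and password and enables mTLS, all drawn from a single `es-secret` secret. A minimal sketch of such a `Secret`, assuming the key names referenced in the CR above; the exact object is not part of this commit, and the secret must exist in the same namespace as the `ClusterLogForwarder` CR:

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: es-secret
  namespace: openshift-logging # same namespace as the ClusterLogForwarder CR
type: Opaque
stringData:
  username: <username>
  password: <password>
  ca-bundle.crt: <PEM_encoded_CA_certificate>
  tls.crt: <PEM_encoded_client_certificate>
  tls.key: <PEM_encoded_client_key>
----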

modules/logging-create-clf.adoc

Lines changed: 32 additions & 31 deletions
@@ -1,54 +1,55 @@
 // Module included in the following assemblies:
 //
-// * observability/logging/log_collection_forwarding/log-forwarding.adoc
+// * configuring/log_collection_forwarding/log-forwarding.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="logging-create-clf_{context}"]
 = Creating a log forwarder
 
-To create a log forwarder, you must create a `ClusterLogForwarder` CR that specifies the log input types that the service account can collect. You can also specify which outputs the logs can be forwarded to. If you are using the multi log forwarder feature, you must also reference the service account in the `ClusterLogForwarder` CR.
-
-If you are using the multi log forwarder feature on your cluster, you can create `ClusterLogForwarder` custom resources (CRs) in any namespace, using any name.
-If you are using a legacy implementation, the `ClusterLogForwarder` CR must be named `instance`, and must be created in the `openshift-logging` namespace.
+To create a log forwarder, create a `ClusterLogForwarder` custom resource (CR). This CR defines the service account, permissible input log types, pipelines, outputs, and any optional filters.
 
 [IMPORTANT]
 ====
 You need administrator permissions for the namespace where you create the `ClusterLogForwarder` CR.
 ====
 
-.ClusterLogForwarder resource example
+.`ClusterLogForwarder` CR example
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1
+apiVersion: observability.openshift.io/v1
 kind: ClusterLogForwarder
-metadata:
-  name: <log_forwarder_name> <1>
-  namespace: <log_forwarder_namespace> <2>
+metadata:
+  name: <log_forwarder_name>
+  namespace: <log_forwarder_namespace>
 spec:
-  serviceAccountName: <service_account_name> <3>
+  outputs: # <1>
+  - name: <output_name>
+    type: <output_type>
+  inputs: # <2>
+  - name: <input_name>
+    type: <input_type>
+  filters: # <3>
+  - name: <filter_name>
+    type: <filter_type>
   pipelines:
-  - inputRefs:
-    - <log_type> <4>
-    outputRefs:
-    - <output_name> <5>
-  outputs:
-  - name: <output_name> <6>
-    type: <output_type> <5>
-    url: <log_output_url> <7>
+  - inputRefs:
+    - <input_name> # <4>
+    outputRefs:
+    - <output_name> # <5>
+    filterRefs:
+    - <filter_name> # <6>
+  serviceAccount:
+    name: <service_account_name> # <7>
 # ...
 ----
-<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name.
-<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace.
-<3> The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the `openshift-logging` namespace.
-<4> The log types that are collected. The value for this field can be `audit` for audit logs, `application` for application logs, `infrastructure` for infrastructure logs, or a named input that has been defined for your application.
-<5> The type of output that you want to forward logs to. The value of this field can be `default`, `loki`, `kafka`, `elasticsearch`, `fluentdForward`, `syslog`, or `cloudwatch`.
-+
-[NOTE]
-====
-The `default` output type is not supported in mutli log forwarder implementations.
-====
-<6> A name for the output that you want to forward logs to.
-<7> The URL of the output that you want to forward logs to.
+<1> The type of output that you want to forward logs to. The value of this field can be `azureMonitor`, `cloudwatch`, `elasticsearch`, `googleCloudLogging`, `http`, `kafka`, `loki`, `lokiStack`, `otlp`, `splunk`, or `syslog`.
+<2> A list of inputs. The names `application`, `audit`, and `infrastructure` are reserved for the default inputs.
+<3> A list of filters to apply to records going through this pipeline. Each filter is applied in the order defined here. If a filter drops a record, subsequent filters are not applied.
+<4> This value should be the same as the input name. You can also use the default input names `application`, `infrastructure`, and `audit`.
+<5> This value should be the same as the output name.
+<6> This value should be the same as the filter name.
+<7> The name of your service account.
+
 
 // To be followed up on by adding input examples / docs:
 ////
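To make the template above concrete, a minimal sketch with the placeholders filled in; the `my-forwarder` name, the `http` output, and the `logging-admin` service account are hypothetical, and no custom inputs or filters are defined because the default `application` input suffices:

[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: my-forwarder
  namespace: my-logging
spec:
  outputs:
  - name: app-http
    type: http
    http:
      url: 'https://log-store.example.com/logs'
  pipelines:
  - name: app-logs
    inputRefs:
    - application # reserved default input; no spec.inputs entry needed
    outputRefs:
    - app-http
  serviceAccount:
    name: logging-admin # must have permission to collect application logs
----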

modules/logging-delivery-tuning.adoc

Lines changed: 4 additions & 3 deletions
@@ -1,12 +1,12 @@
 // Module included in the following assemblies:
 //
-// * observability/logging/log_collection_forwarding/configuring-log-forwarding.adoc
+// * configuring/configuring-log-forwarding.adoc
 
 :_mod-docs-content-type: REFERENCE
 [id="logging-delivery-tuning_{context}"]
 = Tuning log payloads and delivery
 
-In {logging} 5.9 and newer versions, the `tuning` spec in the `ClusterLogForwarder` custom resource (CR) provides a means of configuring your deployment to prioritize either throughput or durability of logs.
+The `tuning` spec in the `ClusterLogForwarder` custom resource (CR) provides a means of configuring your deployment to prioritize either throughput or durability of logs.
 
 For example, if you need to reduce the possibility of log loss when the collector restarts, or you require collected log messages to survive a collector restart to support regulatory mandates, you can tune your deployment to prioritize log durability. If you use outputs that have hard limitations on the size of batches they can receive, you may want to tune your deployment to prioritize log throughput.
 
@@ -41,11 +41,12 @@ spec:
 <1> Specify the delivery mode for log forwarding.
 ** `AtLeastOnce` delivery means that if the log forwarder crashes or is restarted, any logs that were read before the crash but not sent to their destination are re-sent. It is possible that some logs are duplicated after a crash.
 ** `AtMostOnce` delivery means that the log forwarder makes no effort to recover logs lost during a crash. This mode gives better throughput, but may result in greater log loss.
-<2> Specifying a `compression` configuration causes data to be compressed before it is sent over the network. Note that not all output types support compression, and if the specified compression type is not supported by the output, this results in an error. The possible values for this configuration are `none` for no compression, `gzip`, `snappy`, `zlib`, or `zstd`. `lz4` compression is also available if you are using a Kafka output. See the table "Supported compression types for tuning outputs" for more information.
+<2> Specifying a `compression` configuration causes data to be compressed before it is sent over the network. Note that not all output types support compression, and if the specified compression type is not supported by the output, this results in an error. For more information, see "Supported compression types for tuning outputs".
 <3> Specifies a limit for the maximum payload of a single send operation to the output.
 <4> Specifies a minimum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (`ms`), seconds (`s`), or minutes (`m`).
 <5> Specifies a maximum duration to wait between attempts before retrying delivery after a failure. This value is a string, and can be specified as milliseconds (`ms`), seconds (`s`), or minutes (`m`).
 
+[id="supported-compression-types_{context}"]
 .Supported compression types for tuning outputs
 [options="header"]
 |===