|
| 1 | +// Module included in the following assemblies: |
| 2 | +// |
| 3 | +// * configuring/configuring-log-forwarding.adoc |
| 4 | + |
1 | 5 | :_mod-docs-content-type: CONCEPT |
2 | 6 | [id="cluster-logging-collector-log-forwarding-about_{context}"] |
3 | 7 | = About forwarding logs to third-party systems |
4 | 8 |
|
5 | | -To send logs to specific endpoints inside and outside your {ocp-product-title} cluster, you specify a combination of _outputs_ and _pipelines_ in a `ClusterLogForwarder` custom resource (CR). You can also use _inputs_ to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes _Secret_ object. |
| 9 | +To send logs to specific endpoints inside and outside your {ocp-product-title} cluster, you specify a combination of outputs and pipelines in a `ClusterLogForwarder` custom resource (CR). You can also use inputs to forward the application logs associated with a specific project to an endpoint. Authentication is provided by a Kubernetes `Secret` object. |
6 | 10 |
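| | +For example, an input that limits collection to the application logs of a single project, wired to a pipeline, might look like the following sketch. This assumes the `observability.openshift.io/v1` input schema with `includes` and `namespace` selectors; the input, project, and output names are illustrative. |
| | +
| | +[source,yaml] |
| | +---- |
| | +spec: |
| | +  inputs: |
| | +  - name: my-app-logs |
| | +    type: application |
| | +    application: |
| | +      includes: |
| | +      - namespace: my-project # collect application logs only from this project |
| | +  pipelines: |
| | +  - name: my-app |
| | +    inputRefs: |
| | +    - my-app-logs # reference the named input instead of a log type |
| | +    outputRefs: |
| | +    - <output_name> |
| | +---- |
| | +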
|
7 | 11 | _pipeline_:: Defines simple routing from one log type to one or more outputs, or which logs you want to send. The log types are one of the following: |
8 | 12 | + |
@@ -31,103 +35,55 @@ Note the following: |
31 | 35 |
|
32 | 36 | * You can use multiple types of outputs in the `ClusterLogForwarder` custom resource (CR) to send logs to servers that support different protocols. |
33 | 37 |
|
34 | | -The following example forwards the audit logs to a secure external Elasticsearch instance, the infrastructure logs to an insecure external Elasticsearch instance, the application logs to a Kafka broker, and the application logs from the `my-apps-logs` project to the internal Elasticsearch instance. |
| 38 | +The following example forwards application and infrastructure logs to a secure external Elasticsearch instance. |
35 | 39 |
|
36 | 40 | .Sample log forwarding outputs and pipelines |
37 | 41 | [source,yaml] |
38 | 42 | ---- |
39 | | -apiVersion: "logging.openshift.io/v1" |
40 | 43 | kind: ClusterLogForwarder |
| 44 | +apiVersion: observability.openshift.io/v1 |
41 | 45 | metadata: |
42 | | - name: <log_forwarder_name> <1> |
43 | | - namespace: <log_forwarder_namespace> <2> |
| 46 | + name: instance |
| 47 | + namespace: openshift-logging |
44 | 48 | spec: |
45 | | - serviceAccountName: <service_account_name> <3> |
| 49 | + serviceAccount: |
| 50 | + name: logging-admin |
46 | 51 | outputs: |
47 | | - - name: elasticsearch-secure <4> |
48 | | - type: "elasticsearch" |
49 | | - url: https://elasticsearch.secure.com:9200 |
50 | | - secret: |
51 | | - name: elasticsearch |
52 | | - - name: elasticsearch-insecure <5> |
53 | | - type: "elasticsearch" |
54 | | - url: http://elasticsearch.insecure.com:9200 |
55 | | - - name: kafka-app <6> |
56 | | - type: "kafka" |
57 | | - url: tls://kafka.secure.com:9093/app-topic |
58 | | - inputs: <7> |
59 | | - - name: my-app-logs |
60 | | - application: |
61 | | - namespaces: |
62 | | - - my-project |
| 52 | + - name: external-es |
| 53 | + type: elasticsearch |
| 54 | + elasticsearch: |
| 55 | + url: 'https://example-elasticsearch-secure.com:9200' |
| 56 | + version: 8 # <1> |
| 57 | + index: '{.log_type||"undefined"}' # <2> |
| 58 | + authentication: |
| 59 | + username: |
| 60 | + key: username |
| 61 | + secretName: es-secret # <3> |
| 62 | + password: |
| 63 | + key: password |
| 64 | + secretName: es-secret # <3> |
| 65 | + tls: |
| 66 | + ca: # <4> |
| 67 | + key: ca-bundle.crt |
| 68 | + secretName: es-secret |
| 69 | + certificate: |
| 70 | + key: tls.crt |
| 71 | + secretName: es-secret |
| 72 | + key: |
| 73 | + key: tls.key |
| 74 | + secretName: es-secret |
63 | 75 | pipelines: |
64 | | - - name: audit-logs <8> |
65 | | - inputRefs: |
66 | | - - audit |
67 | | - outputRefs: |
68 | | - - elasticsearch-secure |
69 | | - - default |
70 | | - labels: |
71 | | - secure: "true" <9> |
72 | | - datacenter: "east" |
73 | | - - name: infrastructure-logs <10> |
74 | | - inputRefs: |
75 | | - - infrastructure |
76 | | - outputRefs: |
77 | | - - elasticsearch-insecure |
78 | | - labels: |
79 | | - datacenter: "west" |
80 | | - - name: my-app <11> |
81 | | - inputRefs: |
82 | | - - my-app-logs |
83 | | - outputRefs: |
84 | | - - default |
85 | | - - inputRefs: <12> |
86 | | - - application |
87 | | - outputRefs: |
88 | | - - kafka-app |
89 | | - labels: |
90 | | - datacenter: "south" |
| 76 | + - name: my-logs |
| 77 | + inputRefs: |
| 78 | + - application |
| 79 | + - infrastructure |
| 80 | + outputRefs: |
| 81 | + - external-es |
91 | 82 | ---- |
92 | | -<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name. |
93 | | -<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace. |
94 | | -<3> The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the `openshift-logging` namespace. |
95 | | -<4> Configuration for an secure Elasticsearch output using a secret with a secure URL. |
96 | | -** A name to describe the output. |
97 | | -** The type of output: `elasticsearch`. |
98 | | -** The secure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix. |
99 | | -** The secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project. |
100 | | -<5> Configuration for an insecure Elasticsearch output: |
101 | | -** A name to describe the output. |
102 | | -** The type of output: `elasticsearch`. |
103 | | -** The insecure URL and port of the Elasticsearch instance as a valid absolute URL, including the prefix. |
104 | | -<6> Configuration for a Kafka output using a client-authenticated TLS communication over a secure URL: |
105 | | -** A name to describe the output. |
106 | | -** The type of output: `kafka`. |
107 | | -** Specify the URL and port of the Kafka broker as a valid absolute URL, including the prefix. |
108 | | -<7> Configuration for an input to filter application logs from the `my-project` namespace. |
109 | | -<8> Configuration for a pipeline to send audit logs to the secure external Elasticsearch instance: |
110 | | -** A name to describe the pipeline. |
111 | | -** The `inputRefs` is the log type, in this example `audit`. |
112 | | -** The `outputRefs` is the name of the output to use, in this example `elasticsearch-secure` to forward to the secure Elasticsearch instance and `default` to forward to the internal Elasticsearch instance. |
113 | | -** Optional: Labels to add to the logs. |
114 | | -<9> Optional: String. One or more labels to add to the logs. Quote values like "true" so they are recognized as string values, not as a boolean. |
115 | | -<10> Configuration for a pipeline to send infrastructure logs to the insecure external Elasticsearch instance. |
116 | | -<11> Configuration for a pipeline to send logs from the `my-project` project to the internal Elasticsearch instance. |
117 | | -** A name to describe the pipeline. |
118 | | -** The `inputRefs` is a specific input: `my-app-logs`. |
119 | | -** The `outputRefs` is `default`. |
120 | | -** Optional: String. One or more labels to add to the logs. |
121 | | -<12> Configuration for a pipeline to send logs to the Kafka broker, with no pipeline name: |
122 | | -** The `inputRefs` is the log type, in this example `application`. |
123 | | -** The `outputRefs` is the name of the output to use. |
124 | | -** Optional: String. One or more labels to add to the logs. |
125 | | - |
126 | | -[discrete] |
127 | | -[id="cluster-logging-external-fluentd_{context}"] |
128 | | -== Fluentd log handling when the external log aggregator is unavailable |
129 | | - |
130 | | -If your external logging aggregator becomes unavailable and cannot receive logs, Fluentd continues to collect logs and stores them in a buffer. When the log aggregator becomes available, log forwarding resumes, including the buffered logs. If the buffer fills completely, Fluentd stops collecting logs. {ocp-product-title} rotates the logs and deletes them. You cannot adjust the buffer size or add a persistent volume claim (PVC) to the Fluentd daemon set or pods. |
| 83 | +<1> Forwarding to an external Elasticsearch instance of version 8.x or later requires the `version` field. |
| 84 | +<2> The `index` value is read from the `.log_type` field and falls back to `"undefined"` if that field is not present. |
| 85 | +<3> Use a username and password to authenticate to the server. |
| 86 | +<4> Enables mutual Transport Layer Security (mTLS) between the collector and Elasticsearch. The spec identifies the secret and the keys that hold the CA bundle, client certificate, and client key. |
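| | +
| | +The callouts above all reference a single secret, `es-secret`, in the `openshift-logging` namespace. The following is an illustrative sketch of such a secret; the key names match the example output, and every value is a placeholder. |
| | +
| | +[source,yaml] |
| | +---- |
| | +apiVersion: v1 |
| | +kind: Secret |
| | +metadata: |
| | +  name: es-secret |
| | +  namespace: openshift-logging |
| | +type: Opaque |
| | +stringData: |
| | +  username: <username> # referenced by authentication.username |
| | +  password: <password> # referenced by authentication.password |
| | +  ca-bundle.crt: <ca_bundle_pem> # referenced by tls.ca |
| | +  tls.crt: <client_certificate_pem> # referenced by tls.certificate |
| | +  tls.key: <client_key_pem> # referenced by tls.key |
| | +---- |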
131 | 87 |
|
132 | 88 | [discrete] |
133 | 89 | == Supported Authorization Keys |
|