The order of `filterRefs` matters, as they are applied sequentially.

Filters are configured in an array under `spec.filters`. They can match incoming log messages based on the value of structured fields and modify or drop them.

Administrators can configure several types of filters.
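
For example, a `drop` filter can discard messages whose structured fields match a test. The following is a minimal sketch; the `drop` filter type and the `.level` field path are assumptions that can vary by release, and the forwarder, service account, and output names are placeholders:

[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: openshift-logging
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: drop-debug # referenced from filterRefs below
    type: drop
    drop:
    - test:
      - field: .level # structured field path to test
        matches: "debug" # regular expression the field value must match
  pipelines:
  - name: app-logs
    inputRefs:
    - application
    outputRefs:
    - <output_name>
    filterRefs: # filters are applied in the order listed
    - drop-debug
----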

On {sts-short}-enabled clusters such as {product-rosa}, {aws-short} roles are pre-configured with trust policies.

* xref:../modules/cluster-logging-collector-log-forward-sts-cloudwatch.adoc#cluster-logging-collector-log-forward-sts-cloudwatch_configuring-log-forwarding[Forwarding logs to Amazon CloudWatch from STS-enabled clusters]

////
* Creating a secret for CloudWatch with an existing {aws-short} role
* Forwarding logs to Amazon CloudWatch from STS-enabled clusters
////

If you do not have an {aws-short} IAM role pre-configured with trust policies, you can use the following procedures instead:

* xref:../modules/cluster-logging-collector-log-forward-secret-cloudwatch.adoc#cluster-logging-collector-log-forward-secret-cloudwatch_configuring-log-forwarding[Creating a secret for AWS CloudWatch with an existing AWS role]
* xref:../modules/cluster-logging-collector-log-forward-sts-cloudwatch.adoc#cluster-logging-collector-log-forward-sts-cloudwatch[Forwarding logs to Amazon CloudWatch from STS-enabled clusters]
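
For illustration, an existing role can be provided to the log forwarder through a secret. The following is a minimal sketch based on the linked procedures; the secret name `cw-sts-secret` and the `role_arn` key are assumptions that might differ in your release:

[source,terminal]
----
$ oc create secret generic cw-sts-secret -n openshift-logging \
  --from-literal=role_arn=arn:aws:iam::<account_id>:role/<role_name>
----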

You can forward logs to link:https://cloud.google.com/logging/docs/basic-concepts[Google Cloud Logging].

[IMPORTANT]
====
Forwarding logs to GCP is not supported on Red{nbsp}Hat OpenShift on AWS.
====

.Prerequisites

* The {clo} has been installed.

.Procedure

. Create a secret using your link:https://cloud.google.com/iam/docs/creating-managing-service-account-keys[Google service account key].
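+
For example, the secret can be created from a downloaded key file. The secret name and key shown here are illustrative and must match the `secretName` and `key` values that the `ClusterLogForwarder` template in the next step references:
+
[source,terminal]
----
$ oc create secret generic gcp-secret -n openshift-logging \
  --from-file google-application-credentials.json=<service_account_key_file>.json
----
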
. Create a `ClusterLogForwarder` custom resource (CR) YAML file by using the following template:
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: openshift-logging
spec:
  serviceAccount:
    name: <service_account_name> #<1>
  outputs:
  - name: gcp-1
    type: googleCloudLogging
    googleCloudLogging:
      authentication:
        credentials:
          secretName: gcp-secret
          key: google-application-credentials.json
      id:
        type: project
        value: openshift-gce-devel #<2>
      logId: app-gcp #<3>
  pipelines:
  - name: test-app
    inputRefs: #<4>
    - application
    outputRefs:
    - gcp-1
----
<1> The name of your service account.
<2> Set a `project`, `folder`, `organization`, or `billingAccount` field and its corresponding value, depending on where you want to store your logs in the GCP resource hierarchy.
<3> Set the value to add to the `logName` field of the log entry. The value can be a combination of static and dynamic values, consisting of field paths followed by `||`, followed by another field path or a static value. A dynamic value must be enclosed in single curly brackets (`{}`) and must end with a static fallback value separated by `||`. Static values can contain only alphanumeric characters along with dashes, underscores, dots, and forward slashes.
<4> Specify the names of inputs, defined in the `input.name` field, for this pipeline. You can also use the built-in values `application`, `infrastructure`, or `audit`.
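
For instance, a `logId` that combines a static prefix with a dynamic field path and a quoted static fallback, following the syntax described in the preceding callout, could look like the following illustrative value, assuming the record carries a `.log_type` field:

[source,yaml]
----
      logId: app-{.log_type||"unknown"}
----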

To specify the pod labels, you use one or more `matchLabels` key-value pairs. If you specify multiple key-value pairs, a pod must match all of them to be selected.

. Create or edit a YAML file that defines the `ClusterLogForwarder` CR object. In the file, specify the pod labels by using simple equality-based selectors under `inputs[].application.selector.matchLabels`, as shown in the following example.
+
[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccount:
    name: <service_account_name> #<1>
  outputs:
  - <output_name>
# ...
  inputs:
  - name: exampleAppLogData #<2>
    type: application #<3>
    application:
      includes: #<4>
      - namespace: app1
      - namespace: app2
      selector:
        matchLabels: #<5>
          environment: production
          app: nginx
  pipelines:
  - inputRefs:
    - exampleAppLogData
    outputRefs:
# ...
----
<1> Specify the service account name.
<2> Specify a name for the input.
<3> Specify the type as `application` to collect logs from applications.
<4> Specify the set of namespaces to include when collecting logs.
<5> Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and a value, not just a key. To be selected, a pod must match all the key-value pairs.

. Optional: You can send log data from additional applications that have different pod labels to the same pipeline.
.. For each unique combination of pod labels, create an additional `inputs[]` entry similar to the one shown, as in the sketch that follows.
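+
For example, a second input with its own label set can feed the same pipeline. The `billingAppLogData` name and `app: billing` label here are illustrative:
+
[source,yaml]
----
  inputs:
  - name: exampleAppLogData
    type: application
    application:
      selector:
        matchLabels:
          environment: production
          app: nginx
  - name: billingAppLogData
    type: application
    application:
      selector:
        matchLabels:
          app: billing
  pipelines:
  - inputRefs:
    - exampleAppLogData
    - billingAppLogData
# ...
----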

. Apply the `ClusterLogForwarder` CR by running the following command:
+
[source,terminal]
----
$ oc create -f <file-name>.yaml
----

[role="_additional-resources"]
.Additional resources

* link:https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements[Resources that support set-based requirements].
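
A minimal `ClusterLogForwarder` sketch consistent with the following callouts; the input name, namespace, container, label, and pipeline name are illustrative:

[source,yaml]
----
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: <log_forwarder_name>
  namespace: <log_forwarder_namespace>
spec:
  serviceAccount:
    name: <service_account_name>
  outputs:
  - name: <output_name>
# ...
  inputs:
  - name: my-app-logs #<1>
    type: application #<2>
    application:
      includes: #<3>
      - namespace: my-project
        container: my-container
  pipelines:
  - inputRefs:
    - my-app-logs
    outputRefs:
    - <output_name>
    labels: #<4>
      project: my-project
    name: forward-to-external #<5>
----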
<1> Specify the name for the input.
<2> Specify the type as `application` to collect logs from applications.
<3> Specify the set of namespaces and containers to include when collecting logs.
<4> Specify the labels to be applied to log records passing through this pipeline. These labels appear in the `openshift.labels` map in the log record.
<5> Specify a name for the pipeline.

. Apply the `ClusterLogForwarder` CR by running the following command:
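+
[source,terminal]
----
$ oc create -f <file-name>.yaml
----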