Commit 5a2bc54

Merge pull request #94410 from theashiot/config-log-forwarding-2

PT2: Port the Log collection and forwarding chapter to 6.x

2 parents 1b542b1 + ee280a4

6 files changed: +177 −226

configuring/configuring-log-forwarding.adoc

Lines changed: 14 additions & 3 deletions
@@ -111,10 +111,21 @@ The order of filterRefs matters, as they are applied sequentially. Earlier filte
 
 Filters are configured in an array under `spec.filters`. They can match incoming log messages based on the value of structured fields and modify or drop them.
 
-Administrators can configure the following types of filters:
 
 include::modules/enabling-multi-line-exception-detection.adoc[leveloffset=+2]
-include::modules/logging-http-forward.adoc[leveloffset=+2]
+
+include::modules/cluster-logging-collector-log-forward-gcp.adoc[leveloffset=+1]
+
+include::modules/logging-forward-splunk.adoc[leveloffset=+1]
+
+include::modules/logging-http-forward.adoc[leveloffset=+1]
+
+include::modules/logging-forwarding-azure.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-collector-log-forward-project.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc[leveloffset=+1]
+
 include::modules/cluster-logging-collector-log-forward-syslog.adoc[leveloffset=+2]
 
 
@@ -135,6 +146,7 @@ On {sts-short}-enabled clusters such as {product-rosa}, {aws-short} roles are pr
 
 * xref:../modules/cluster-logging-collector-log-forward-sts-cloudwatch.adoc#cluster-logging-collector-log-forward-sts-cloudwatch_configuring-log-forwarding[Forwarding logs to Amazon CloudWatch from STS enabled clusters]
 ////
+
 * Creating a secret for CloudWatch with an existing {aws-short} role
 
 * Forwarding logs to Amazon CloudWatch from STS-enabled clusters
@@ -146,7 +158,6 @@ If you do not have an {aws-short} IAM role pre-configured with trust policies, y
 * xref:../modules/cluster-logging-collector-log-forward-secret-cloudwatch.adoc#cluster-logging-collector-log-forward-secret-cloudwatch_configuring-log-forwarding[Creating a secret for AWS CloudWatch with an existing AWS role]
 * xref:../modules/cluster-logging-collector-log-forward-sts-cloudwatch.adoc#cluster-logging-collector-log-forward-sts-cloudwatch[Forwarding logs to Amazon CloudWatch from STS enabled clusters]
 ////
-
 include::modules/creating-an-aws-role.adoc[leveloffset=+2]
 include::modules/cluster-logging-collector-log-forward-secret-cloudwatch.adoc[leveloffset=+2]
 include::modules/cluster-logging-collector-log-forward-sts-cloudwatch.adoc[leveloffset=+2]
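The hunk context above notes that the order of `filterRefs` matters because filters are applied sequentially, and that filters may modify or drop records. That semantics can be sketched in a few lines of Python (an illustrative model only, not operator code; `drop_debug` and `tag` are hypothetical filters):

```python
def run_pipeline(record, filters):
    """Apply filters in filterRefs order; any filter may modify or drop a record."""
    for f in filters:
        record = f(record)
        if record is None:  # a filter dropped the record; later filters never see it
            return None
    return record

# Hypothetical filters: drop debug-level records, then tag whatever survives.
drop_debug = lambda r: None if r.get("level") == "debug" else r
tag = lambda r: {**r, "tagged": True}

print(run_pipeline({"level": "info"}, [drop_debug, tag]))   # {'level': 'info', 'tagged': True}
print(run_pipeline({"level": "debug"}, [drop_debug, tag]))  # None
```

Reordering the list changes the outcome, which is why earlier filters in `filterRefs` take precedence.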

modules/cluster-logging-collector-log-forward-gcp.adoc

Lines changed: 26 additions & 20 deletions
@@ -6,59 +6,65 @@
 [id="cluster-logging-collector-log-forward-gcp_{context}"]
 = Forwarding logs to Google Cloud Platform (GCP)
 
-You can forward logs to link:https://cloud.google.com/logging/docs/basic-concepts[Google Cloud Logging] in addition to, or instead of, the internal default {ocp-product-title} log store.
+You can forward logs to link:https://cloud.google.com/logging/docs/basic-concepts[Google Cloud Logging].
 
-[NOTE]
+[IMPORTANT]
 ====
-Using this feature with Fluentd is not supported.
+Forwarding logs to GCP is not supported on Red{nbsp}Hat OpenShift on AWS.
 ====
 
 .Prerequisites
 
-* {clo} 5.5.1 and later
+* {clo} has been installed.
 
 .Procedure
 
 . Create a secret using your link:https://cloud.google.com/iam/docs/creating-managing-service-account-keys[Google service account key].
 +
 [source,terminal,subs="+quotes"]
 ----
-$ oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json=_<your_service_account_key_file.json>_
+$ oc -n openshift-logging create secret generic gcp-secret --from-file google-application-credentials.json=<your_service_account_key_file.json>
 ----
+
 . Create a `ClusterLogForwarder` Custom Resource YAML using the template below:
 +
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1
+apiVersion: observability.openshift.io/v1
 kind: ClusterLogForwarder
 metadata:
-  name: <log_forwarder_name> <1>
-  namespace: <log_forwarder_namespace> <2>
+  name: <log_forwarder_name>
+  namespace: openshift-logging
 spec:
-  serviceAccountName: <service_account_name> <3>
+  serviceAccount:
+    name: <service_account_name> #<1>
   outputs:
   - name: gcp-1
     type: googleCloudLogging
-    secret:
-      name: gcp-secret
     googleCloudLogging:
-      projectId: "openshift-gce-devel" <4>
-      logId: "app-gcp" <5>
+      authentication:
+        credentials:
+          secretName: gcp-secret
+          key: google-application-credentials.json
+      id:
+        type: project
+        value: openshift-gce-devel #<2>
+      logId: app-gcp #<3>
   pipelines:
   - name: test-app
-    inputRefs: <6>
+    inputRefs: #<4>
     - application
     outputRefs:
     - gcp-1
 ----
-<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name.
-<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace.
-<3> The name of your service account. The service account is only required in multi log forwarder implementations if the log forwarder is not deployed in the `openshift-logging` namespace.
-<4> Set a `projectId`, `folderId`, `organizationId`, or `billingAccountId` field and its corresponding value, depending on where you want to store your logs in the link:https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy[GCP resource hierarchy].
-<5> Set the value to add to the `logName` field of the link:https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry[Log Entry].
-<6> Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
+
+<1> The name of your service account.
+<2> Set a `project`, `folder`, `organization`, or `billingAccount` field and its corresponding value, depending on where you want to store your logs in the GCP resource hierarchy.
+<3> Set the value to add to the `logName` field of the log entry. The value can be a combination of static and dynamic values consisting of field paths followed by `||`, followed by another field path or a static value. A dynamic value must be encased in single curly brackets `{}` and must end with a static fallback value separated with `||`. Static values can only contain alphanumeric characters along with dashes, underscores, dots, and forward slashes.
+<4> Specify the names of inputs, defined in the `input.name` field, for this pipeline. You can also use the built-in values `application`, `infrastructure`, or `audit`.
 
 [role="_additional-resources"]
 .Additional resources
 * link:https://cloud.google.com/billing/docs/concepts[Google Cloud Billing Documentation]
+* link:https://cloud.google.com/logging/docs[Cloud Logging documentation] for GCP.
 * link:https://cloud.google.com/logging/docs/view/logging-query-language[Google Cloud Logging Query Language Documentation]
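The `logId` templating rules described in callout <3> of the GCP module (static segments limited to alphanumerics plus dashes, underscores, dots, and slashes; dynamic values in `{}` that must end with a static fallback after `||`) can be sketched as a small validator. This is a hypothetical helper for illustration only; the operator performs its own validation and the exact grammar may differ:

```python
import re

# Static segments: alphanumerics plus dashes, underscores, dots, forward slashes.
STATIC = re.compile(r"^[A-Za-z0-9\-_./]+$")

def is_valid_log_id(value: str) -> bool:
    """Check a logId against the documented rules: static text, or dynamic
    values in {field||...||fallback} that end with a static fallback."""
    pos = 0
    while pos < len(value):
        if value[pos] == "{":
            end = value.find("}", pos)
            if end == -1:
                return False  # unclosed dynamic value
            parts = value[pos + 1:end].split("||")
            # A dynamic value must end with a static fallback value.
            if len(parts) < 2 or not STATIC.match(parts[-1]):
                return False
            pos = end + 1
        else:
            nxt = value.find("{", pos)
            nxt = len(value) if nxt == -1 else nxt
            if not STATIC.match(value[pos:nxt]):
                return False
            pos = nxt
    return bool(value)

print(is_valid_log_id("app-gcp"))                    # True: static only
print(is_valid_log_id("{.log_type||unknown}-logs"))  # True: dynamic with fallback
print(is_valid_log_id("{.log_type}"))                # False: no static fallback
```

The field path `.log_type` above is an assumed example, not a value taken from this commit.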
modules/cluster-logging-collector-log-forward-logs-from-application-pods.adoc

Lines changed: 30 additions & 31 deletions
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * observability/logging/log_collection_forwarding/configuring-log-forwarding.adoc
+// * configuring/configuring-log-forwarding.adoc
 
 :_mod-docs-content-type: PROCEDURE
 [id="cluster-logging-collector-log-forward-logs-from-application-pods_{context}"]
@@ -16,42 +16,41 @@ To specify the pod labels, you use one or more `matchLabels` key-value pairs. If
 
 . Create or edit a YAML file that defines the `ClusterLogForwarder` CR object. In the file, specify the pod labels using simple equality-based selectors under `inputs[].name.application.selector.matchLabels`, as shown in the following example.
 +
-.Example `ClusterLogForwarder` CR YAML file
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1
+apiVersion: observability.openshift.io/v1
 kind: ClusterLogForwarder
 metadata:
-  name: <log_forwarder_name> <1>
-  namespace: <log_forwarder_namespace> <2>
+  name: <log_forwarder_name>
+  namespace: <log_forwarder_namespace>
 spec:
-  pipelines:
-  - inputRefs: [ myAppLogData ] <3>
-    outputRefs: [ default ] <4>
-  inputs: <5>
-  - name: myAppLogData
-    application:
-      selector:
-        matchLabels: <6>
-          environment: production
-          app: nginx
-      namespaces: <7>
-      - app1
-      - app2
-  outputs: <8>
+  serviceAccount:
+    name: <service_account_name> #<1>
+  outputs:
   - <output_name>
-  ...
+# ...
+  inputs:
+  - name: exampleAppLogData #<2>
+    type: application #<3>
+    application:
+      includes: #<4>
+      - namespace: app1
+      - namespace: app2
+      selector:
+        matchLabels: #<5>
+          environment: production
+          app: nginx
+  pipelines:
+  - inputRefs:
+    - exampleAppLogData
+    outputRefs:
+# ...
 ----
-<1> In legacy implementations, the CR name must be `instance`. In multi log forwarder implementations, you can use any name.
-<2> In legacy implementations, the CR namespace must be `openshift-logging`. In multi log forwarder implementations, you can use any namespace.
-<3> Specify one or more comma-separated values from `inputs[].name`.
-<4> Specify one or more comma-separated values from `outputs[]`.
-<5> Define a unique `inputs[].name` for each application that has a unique set of pod labels.
-<6> Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
-<7> Optional: Specify one or more namespaces.
-<8> Specify one or more outputs to forward your log data to.
-
-. Optional: To restrict the gathering of log data to specific namespaces, use `inputs[].name.application.namespaces`, as shown in the preceding example.
+<1> Specify the service account name.
+<2> Specify a name for the input.
+<3> Specify the type as `application` to collect logs from applications.
+<4> Specify the set of namespaces to include when collecting logs.
+<5> Specify the key-value pairs of pod labels whose log data you want to gather. You must specify both a key and value, not just a key. To be selected, the pods must match all the key-value pairs.
 
 . Optional: You can send log data from additional applications that have different pod labels to the same pipeline.
 .. For each unique combination of pod labels, create an additional `inputs[].name` section similar to the one shown.
@@ -72,4 +71,4 @@ $ oc create -f <file-name>.yaml
 [role="_additional-resources"]
 .Additional resources
 
-* For more information on `matchLabels` in Kubernetes, see link:https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements[Resources that support set-based requirements].
+* link:https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements[Resources that support set-based requirements].
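The selection rule for `matchLabels` (a pod is collected only if it matches every key-value pair) can be sketched in a few lines of Python. This is an illustrative model of the Kubernetes equality-based selector semantics, not collector code:

```python
def matches_all_labels(pod_labels: dict, match_labels: dict) -> bool:
    """A pod is selected only if every matchLabels key-value pair is present."""
    return all(pod_labels.get(key) == value for key, value in match_labels.items())

selector = {"environment": "production", "app": "nginx"}
# Extra labels on the pod do not matter; missing or mismatched ones do.
print(matches_all_labels({"environment": "production", "app": "nginx", "tier": "web"}, selector))  # True
print(matches_all_labels({"environment": "production"}, selector))  # False
```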

modules/cluster-logging-collector-log-forward-project.adoc

Lines changed: 37 additions & 55 deletions
@@ -1,6 +1,6 @@
 // Module included in the following assemblies:
 //
-// * observability/logging/log_collection_forwarding/configuring-log-forwarding.adoc
+// * configuring/configuring-log-forwarding.adoc
 
 :_mod-docs-content-type: PROCEDURE
 [id="cluster-logging-collector-log-forward-project_{context}"]
@@ -15,71 +15,53 @@ To configure forwarding application logs from a project, you must create a `Clus
 * You must have a logging server that is configured to receive the logging data using the specified protocol or format.
 
 .Procedure
-
+
 . Create or edit a YAML file that defines the `ClusterLogForwarder` CR:
 +
 .Example `ClusterLogForwarder` CR
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1
+apiVersion: observability.openshift.io/v1
 kind: ClusterLogForwarder
 metadata:
-  name: instance <1>
-  namespace: openshift-logging <2>
+  name: <log_forwarder_name>
+  namespace: <log_forwarder_namespace>
 spec:
+  serviceAccount:
+    name: <service_account_name>
   outputs:
-  - name: fluentd-server-secure <3>
-    type: fluentdForward <4>
-    url: 'tls://fluentdserver.security.example.com:24224' <5>
-    secret: <6>
-      name: fluentd-secret
-  - name: fluentd-server-insecure
-    type: fluentdForward
-    url: 'tcp://fluentdserver.home.example.com:24224'
-  inputs: <7>
-  - name: my-app-logs
-    application:
-      namespaces:
-      - my-project <8>
+  - name: <output_name>
+    type: <output_type>
+  inputs:
+  - name: my-app-logs #<1>
+    type: application #<2>
+    application:
+      includes: #<3>
+      - namespace: my-project
+  filters:
+  - name: my-project-labels
+    type: openshiftLabels
+    openshiftLabels: #<4>
+      project: my-project
+  - name: cluster-labels
+    type: openshiftLabels
+    openshiftLabels:
+      clusterId: C1234
   pipelines:
-  - name: forward-to-fluentd-insecure <9>
-    inputRefs: <10>
-    - my-app-logs
-    outputRefs: <11>
-    - fluentd-server-insecure
-    labels:
-      project: "my-project" <12>
-  - name: forward-to-fluentd-secure <13>
-    inputRefs:
-    - application <14>
-    - audit
-    - infrastructure
-    outputRefs:
-    - fluentd-server-secure
-    - default
-    labels:
-      clusterId: "C1234"
+  - name: <pipeline_name> #<5>
+    inputRefs:
+    - my-app-logs
+    outputRefs:
+    - <output_name>
+    filterRefs:
+    - my-project-labels
+    - cluster-labels
 ----
-<1> The name of the `ClusterLogForwarder` CR must be `instance`.
-<2> The namespace for the `ClusterLogForwarder` CR must be `openshift-logging`.
-<3> The name of the output.
-<4> The output type: `elasticsearch`, `fluentdForward`, `syslog`, or `kafka`.
-<5> The URL and port of the external log aggregator as a valid absolute URL. If the cluster-wide proxy using the CIDR annotation is enabled, the output must be a server name or FQDN, not an IP address.
-<6> If using a `tls` prefix, you must specify the name of the secret required by the endpoint for TLS communication. The secret must exist in the `openshift-logging` project and have *tls.crt*, *tls.key*, and *ca-bundle.crt* keys that each point to the certificates they represent.
-<7> The configuration for an input to filter application logs from the specified projects.
-<8> If no namespace is specified, logs are collected from all namespaces.
-<9> The pipeline configuration directs logs from a named input to a named output. In this example, a pipeline named `forward-to-fluentd-insecure` forwards logs from an input named `my-app-logs` to an output named `fluentd-server-insecure`.
-<10> A list of inputs.
-<11> The name of the output to use.
-<12> Optional: String. One or more labels to add to the logs.
-<13> Configuration for a pipeline to send logs to other log aggregators.
-+
-* Optional: Specify a name for the pipeline.
-* Specify which log types to forward by using the pipeline: `application`, `infrastructure`, or `audit`.
-* Specify the name of the output to use when forwarding logs with this pipeline.
-* Optional: Specify the `default` output to forward logs to the default log store.
-* Optional: String. One or more labels to add to the logs.
-<14> Note that application logs from all namespaces are collected when using this configuration.
+<1> Specify the name for the input.
+<2> Specify the type as `application` to collect logs from applications.
+<3> Specify the set of namespaces and containers to include when collecting logs.
+<4> Specify the labels to be applied to log records passing through this pipeline. These labels appear in the `openshift.labels` map in the log record.
+<5> Specify a name for the pipeline.
 
 . Apply the `ClusterLogForwarder` CR by running the following command:
 +
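The `openshiftLabels` filter in the example above attaches static labels to each log record, surfacing them in the record's `openshift.labels` map. A minimal sketch of that behavior, assuming a plain-dict record shape (illustrative only, not collector code):

```python
def apply_openshift_labels(record: dict, labels: dict) -> dict:
    """Merge static labels into the record's openshift.labels map."""
    record.setdefault("openshift", {}).setdefault("labels", {}).update(labels)
    return record

# Two filters in filterRefs order, as in the example pipeline above.
record = apply_openshift_labels({"message": "hello"}, {"project": "my-project"})
record = apply_openshift_labels(record, {"clusterId": "C1234"})
print(record["openshift"]["labels"])  # {'project': 'my-project', 'clusterId': 'C1234'}
```

Because each filter only merges its own keys, applying both filters yields the union of their labels on every record that flows through the pipeline.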
