
Commit 64e8d00

Merge pull request #96822 from theashiot/OBSDOCS-948
OBSDOCS-948: Follow up changes for http/syslog input docs
2 parents: b4002d1 + ca39ed2

8 files changed: +299 -108 lines

_topic_maps/_topic_map.yml

Lines changed: 2 additions & 0 deletions
@@ -32,6 +32,8 @@ Distros: openshift-logging
 Topics:
 - Name: Configuring log forwarding
   File: configuring-log-forwarding
+- Name: Configuring the logging collector
+  File: cluster-logging-collector
 - Name: Configuring the log store
   File: configuring-the-log-store
 - Name: Configuring LokiStack for OTLP
Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
+:_mod-docs-content-type: ASSEMBLY
+:context: cluster-logging-collector
+[id="cluster-logging-collector"]
+= Configuring the logging collector
+include::_attributes/common-attributes.adoc[]
+
+toc::[]
+
+{logging-title-uc} collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata.
+All supported modifications to the log collector can be performed through the `spec.collector` stanza in the `ClusterLogForwarder` custom resource (CR).
+
+include::modules/creating-logfilesmetricexporter.adoc[leveloffset=+1]
+
+include::modules/cluster-logging-collector-limits.adoc[leveloffset=+1]
+
+[id="cluster-logging-collector-input-receivers"]
+== Configuring input receivers
+
+The {clo} deploys a service for each configured input receiver so that clients can write to the collector. This service exposes the port specified for the input receiver. For log forwarder `ClusterLogForwarder` CR deployments, the service name is in the `<clusterlogforwarder_resource_name>-<input_name>` format.
+
+include::modules/configuring-the-collector-to-receive-audit-logs-as-an-http-server.adoc[leveloffset=+2]
+include::modules/configuring-the-collector-to-listen-for-connections-as-a-syslog-server.adoc[leveloffset=+2]
+
+//include::modules/cluster-logging-collector-tuning.adoc[leveloffset=+1]
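The service naming and port rules stated in this assembly (service name `<clusterlogforwarder_resource_name>-<input_name>`, receiver ports between `1024` and `65535`) can be sketched as a small helper. This is an illustrative sketch, not operator code; the function names are invented for the example:

```python
# Illustrative sketch (not operator code): derive the Service name that the
# operator creates for an input receiver, and validate the receiver port
# against the range documented in the receiver modules.

def receiver_service_name(clf_name: str, input_name: str) -> str:
    """Service name format: <clusterlogforwarder_resource_name>-<input_name>."""
    return f"{clf_name}-{input_name}"

def validate_receiver_port(port: int) -> int:
    """Receiver ports must be between 1024 and 65535 per the docs."""
    if not 1024 <= port <= 65535:
        raise ValueError(f"receiver port {port} outside 1024-65535")
    return port

print(receiver_service_name("collector", "syslog-receiver"))  # collector-syslog-receiver
print(validate_receiver_port(10514))  # 10514
```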

log_collection_forwarding/cluster-logging-collector.adoc

Lines changed: 3 additions & 1 deletion
@@ -4,6 +4,8 @@
 = Configuring the logging collector
 include::_attributes/common-attributes.adoc[]

+//This is a duplicate file and should be removed in the future.
+
 toc::[]

 {logging-title-uc} collects operations and application logs from your cluster and enriches the data with Kubernetes pod and project metadata.
@@ -24,7 +26,7 @@ The service name is generated based on the following:
 * For multi log forwarder `ClusterLogForwarder` CR deployments, the service name is in the format `<ClusterLogForwarder_CR_name>-<input_name>`. For example, `example-http-receiver`.
 * For legacy `ClusterLogForwarder` CR deployments, meaning those named `instance` and located in the `openshift-logging` namespace, the service name is in the format `collector-<input_name>`. For example, `collector-http-receiver`.

-include::modules/log-collector-http-server.adoc[leveloffset=+2]
+//include::modules/log-collector-http-server.adoc[leveloffset=+2]
 //include::modules/log-collector-rsyslog-server.adoc[leveloffset=+2]
 // uncomment for 5.9 release

Lines changed: 12 additions & 12 deletions
@@ -1,16 +1,16 @@
 // Module included in the following assemblies:
 //
-// * observability/logging/cluster-logging-collector.adoc
+// * configuring/cluster-logging-collector.adoc

 :_mod-docs-content-type: PROCEDURE
 [id="cluster-logging-collector-limits_{context}"]
 = Configure log collector CPU and memory limits

-The log collector allows for adjustments to both the CPU and memory limits.
+You can adjust both the CPU and memory limits for the log collector by editing the `ClusterLogForwarder` custom resource (CR).

 .Procedure

-* Edit the `ClusterLogging` custom resource (CR) in the `openshift-logging` project:
+* Edit the `ClusterLogForwarder` CR in the `openshift-logging` project:
 +
 [source,terminal]
 ----
@@ -19,20 +19,20 @@ $ oc -n openshift-logging edit ClusterLogging instance
 +
 [source,yaml]
 ----
-apiVersion: logging.openshift.io/v1
-kind: ClusterLogging
+apiVersion: observability.openshift.io/v1
+kind: ClusterLogForwarder
 metadata:
-  name: instance
+  name: <clf_name> #<1>
   namespace: openshift-logging
 spec:
-  collection:
-    type: fluentd
-    resources:
-      limits: <1>
-        memory: 736Mi
+  collector:
+    resources: #<2>
       requests:
+        memory: 736Mi
+      limits:
         cpu: 100m
         memory: 736Mi
 # ...
 ----
-<1> Specify the CPU and memory limits and requests as needed. The values shown are the default values.
+<1> Specify a name for the `ClusterLogForwarder` CR.
+<2> Specify the CPU and memory limits and requests as needed. The values shown are the default values.
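The quantities in the hunk above use standard Kubernetes notation (`736Mi` memory, `100m` CPU). A minimal sketch of how those strings map to base units, useful for sanity-checking edited limits; the parsing functions are illustrative, not part of any OpenShift tooling:

```python
# Illustrative sketch: convert Kubernetes resource quantity strings to base
# units, to sanity-check values such as the defaults shown in the diff.

def parse_memory(quantity: str) -> int:
    """Convert a memory quantity such as '736Mi' to bytes."""
    suffixes = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}
    for suffix, factor in suffixes.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # plain bytes

def parse_cpu(quantity: str) -> float:
    """Convert a CPU quantity such as '100m' (millicores) to cores."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000
    return float(quantity)

# The defaults in the example set requests.memory equal to limits.memory.
assert parse_memory("736Mi") == 736 * 1024**2
assert parse_cpu("100m") == 0.1
```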
Lines changed: 139 additions & 0 deletions
@@ -0,0 +1,139 @@
+// Module included in the following assemblies:
+//
+// * configuring/cluster-logging-collector.adoc
+
+
+:_newdoc-version: 2.18.4
+:_template-generated: 2025-08-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="configuring-the-collector-to-listen-for-connections-as-a-syslog-server_{context}"]
+= Configuring the collector to listen for connections as a syslog server
+
+You can configure your log collector to collect journal format infrastructure logs by specifying `syslog` as a receiver input in the `ClusterLogForwarder` custom resource (CR).
+
+:feature-name: Syslog receiver input
+include::snippets/logging-http-sys-input-support.adoc[]
+
+.Prerequisites
+
+* You have administrator permissions.
+* You have installed the {oc-first}.
+* You have installed the {clo}.
+
+.Procedure
+
+. Grant the `collect-infrastructure-logs` cluster role to the service account by running the following command:
++
+.Example binding command
+[source,terminal]
+----
+$ oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logcollector
+----
+
+. Modify the `ClusterLogForwarder` CR to add configuration for the `syslog` receiver input:
++
+.Example `ClusterLogForwarder` CR
+[source,yaml]
+----
+apiVersion: observability.openshift.io/v1
+kind: ClusterLogForwarder
+metadata:
+  name: <clusterlogforwarder_name>
+  namespace: <namespace>
+# ...
+spec:
+  serviceAccount:
+    name: <service_account_name> # <1>
+  inputs:
+  - name: syslog-receiver # <2>
+    type: receiver
+    receiver:
+      type: syslog # <3>
+      port: 10514 # <4>
+  outputs:
+  - name: <output_name>
+    lokiStack:
+      authentication:
+        token:
+          from: serviceAccount
+      target:
+        name: logging-loki
+        namespace: openshift-logging
+    tls: # <5>
+      ca:
+        key: service-ca.crt
+        configMapName: openshift-service-ca.crt
+    type: lokiStack
+# ...
+  pipelines: # <6>
+  - name: syslog-pipeline
+    inputRefs:
+    - syslog-receiver
+    outputRefs:
+    - <output_name>
+# ...
+----
+<1> Use the service account that you granted the `collect-infrastructure-logs` permission in the previous step.
+<2> Specify a name for your input receiver.
+<3> Specify the input receiver type as `syslog`.
+<4> Optional: Specify the port that the input receiver listens on. This must be a value between `1024` and `65535`.
+<5> If TLS configuration is not set, the default certificates are used. For more information, run the command `oc explain clusterlogforwarders.spec.inputs.receiver.tls`.
+<6> Configure a pipeline for your input receiver.
+
+. Apply the changes to the `ClusterLogForwarder` CR by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f <filename>.yaml
+----
+
+. Verify that the collector is listening on the service that has a name in the `<clusterlogforwarder_resource_name>-<input_name>` format by running the following command:
++
+[source,terminal]
+----
+$ oc get svc
+----
++
+.Example output
+[source,terminal,options="nowrap"]
+----
+NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
+collector                   ClusterIP   172.30.85.239    <none>        24231/TCP   33m
+collector-syslog-receiver   ClusterIP   172.30.216.142   <none>        10514/TCP   2m20s
+----
++
+In this example output, the service name is `collector-syslog-receiver`.
+
+.Verification
+
+. Extract the certificate authority (CA) certificate file by running the following command:
++
+[source,terminal]
+----
+$ oc extract cm/openshift-service-ca.crt -n <namespace>
+----
++
+[NOTE]
+====
+If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.
+====
+
+. As an example, use the `curl` command to send logs by running the following command:
++
+[source,terminal]
+----
+$ curl --cacert <openshift_service_ca.crt> collector-syslog-receiver.<namespace>.svc:10514 "test message"
+----
++
+Replace `<openshift_service_ca.crt>` with the extracted CA certificate file.
+
+////
+. As an example, send logs by running the following command:
++
+[source,terminal]
+----
+$ logger --tcp --server collector-syslog-receiver.<ns>.svc:10514 "test message"
+----
+////
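The commented-out `logger` example in the module above sends a message to the receiver over TCP. The same idea can be sketched in Python, assuming the syslog service is reachable over plain TCP from where the script runs; the hostname and message framing here are illustrative:

```python
# Illustrative sketch: build a minimal RFC 5424 syslog line and send it to
# the collector's syslog receiver service over TCP.
import socket
from datetime import datetime, timezone

def rfc5424_message(msg: str, hostname: str = "test-host",
                    app: str = "test-app", facility: int = 16,
                    severity: int = 6) -> bytes:
    """Build a minimal RFC 5424 line; PRI = facility * 8 + severity."""
    pri = facility * 8 + severity
    ts = datetime.now(timezone.utc).isoformat()
    return f"<{pri}>1 {ts} {hostname} {app} - - - {msg}\n".encode()

def send_syslog(host: str, port: int, msg: str) -> None:
    """Send one message to the syslog receiver service over plain TCP."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(rfc5424_message(msg))

# Example (run from a pod inside the cluster; placeholders as in the docs):
# send_syslog("collector-syslog-receiver.<namespace>.svc", 10514, "test message")
```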
Lines changed: 111 additions & 0 deletions
@@ -0,0 +1,111 @@
+// Module included in the following assemblies:
+//
+// * configuring/cluster-logging-collector.adoc
+
+:_newdoc-version: 2.18.4
+:_template-generated: 2025-08-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="configuring-the-collector-to-receive-audit-logs-as-an-http-server_{context}"]
+= Configuring the collector to receive audit logs as an HTTP server
+
+You can configure your log collector to listen for HTTP connections and receive only audit logs by specifying `http` as a receiver input in the `ClusterLogForwarder` custom resource (CR).
+
+:feature-name: HTTP receiver input
+include::snippets/logging-http-sys-input-support.adoc[]
+
+.Prerequisites
+
+* You have administrator permissions.
+* You have installed the {oc-first}.
+* You have installed the {clo}.
+
+.Procedure
+
+. Modify the `ClusterLogForwarder` CR to add configuration for the `http` receiver input:
++
+.Example `ClusterLogForwarder` CR
+[source,yaml]
+----
+apiVersion: observability.openshift.io/v1
+kind: ClusterLogForwarder
+metadata:
+  name: <clusterlogforwarder_name> #<1>
+  namespace: <namespace>
+# ...
+spec:
+  serviceAccount:
+    name: <service_account_name>
+  inputs:
+  - name: http-receiver #<2>
+    type: receiver
+    receiver:
+      type: http #<3>
+      port: 8443 #<4>
+      http:
+        format: kubeAPIAudit #<5>
+  outputs:
+  - name: <output_name>
+    type: http
+    http:
+      url: <url>
+  pipelines: #<6>
+  - name: http-pipeline
+    inputRefs:
+    - http-receiver
+    outputRefs:
+    - <output_name>
+# ...
+----
+<1> Specify a name for the `ClusterLogForwarder` CR.
+<2> Specify a name for your input receiver.
+<3> Specify the input receiver type as `http`.
+<4> Optional: Specify the port that the input receiver listens on. This must be a value between `1024` and `65535`. The default value is `8443` if this is not specified.
+<5> Currently, only the `kube-apiserver` webhook format is supported for `http` input receivers.
+<6> Configure a pipeline for your input receiver.
+
+. Apply the changes to the `ClusterLogForwarder` CR by running the following command:
++
+[source,terminal]
+----
+$ oc apply -f <filename>.yaml
+----
+
+. Verify that the collector is listening on the service that has a name in the `<clusterlogforwarder_resource_name>-<input_name>` format by running the following command:
++
+[source,terminal]
+----
+$ oc get svc
+----
++
+.Example output
+[source,terminal,options="nowrap"]
+----
+NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
+collector                 ClusterIP   172.30.85.239    <none>        24231/TCP   3m6s
+collector-http-receiver   ClusterIP   172.30.205.160   <none>        8443/TCP    3m6s
+----
++
+In this example output, the service name is `collector-http-receiver`.
+
+.Verification
+
+. Extract the certificate authority (CA) certificate file by running the following command:
++
+[source,terminal]
+----
+$ oc extract cm/openshift-service-ca.crt -n <namespace>
+----
++
+[NOTE]
+====
+If the CA in the cluster where the collectors are running changes, you must extract the CA certificate file again.
+====
+
+. As an example, use the `curl` command to send logs by running the following command:
++
+[source,terminal]
+----
+$ curl --cacert <openshift_service_ca.crt> https://collector-http-receiver.<namespace>.svc:8443 -XPOST -d '{"<prefix>":"<message>"}'
+----
++
+Replace `<openshift_service_ca.crt>` with the extracted CA certificate file.
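The `curl` verification in the module above can also be sketched in Python with the standard library, assuming the HTTP receiver service is reachable and the extracted service CA file is at hand. The event fields below are illustrative placeholders, not a complete `audit.k8s.io/v1` Event:

```python
# Illustrative sketch: POST one audit-style JSON event to the HTTP receiver,
# trusting the extracted OpenShift service CA.
import json
import ssl
import urllib.request

def audit_event(message: str) -> dict:
    """Build a minimal audit-style event; values are placeholders."""
    return {
        "kind": "Event",
        "apiVersion": "audit.k8s.io/v1",
        "level": "Metadata",
        "auditID": "00000000-0000-0000-0000-000000000000",
        "stage": "ResponseComplete",
        "requestURI": "/apis/example",
        "verb": "get",
        "annotations": {"message": message},
    }

def post_audit_log(url: str, ca_file: str, message: str) -> int:
    """POST the event and return the HTTP status code."""
    ctx = ssl.create_default_context(cafile=ca_file)
    req = urllib.request.Request(
        url,
        data=json.dumps(audit_event(message)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, context=ctx) as resp:
        return resp.status

# Example (run from a pod inside the cluster; placeholders as in the docs):
# post_audit_log("https://collector-http-receiver.<namespace>.svc:8443",
#                "service-ca.crt", "test message")
```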

modules/creating-logfilesmetricexporter.adoc

Lines changed: 8 additions & 5 deletions
@@ -1,14 +1,17 @@
 // Module included in the following assemblies:
 //
-// * observability/logging/log_collection_forwarding/cluster-logging-collector.adoc
+// * configuring/cluster-logging-collector.adoc

 :_mod-docs-content-type: PROCEDURE
 [id="creating-logfilesmetricexporter_{context}"]
 = Creating a LogFileMetricExporter resource

-In {logging} version 5.8 and newer versions, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a `LogFileMetricExporter` custom resource (CR) to generate metrics from the logs produced by running containers.
+You must manually create a `LogFileMetricExporter` custom resource (CR) to generate metrics from the logs produced by running containers, because it is not deployed with the collector by default.

-If you do not create the `LogFileMetricExporter` CR, you may see a *No datapoints found* message in the {ocp-product-title} web console dashboard for *Produced Logs*.
+[NOTE]
+====
+If you do not create the `LogFileMetricExporter` CR, you might see a *No datapoints found* message in the {ocp-product-title} web console dashboard for the *Produced Logs* field.
+====

 .Prerequisites
@@ -53,8 +56,6 @@ $ oc apply -f <filename>.yaml

 .Verification

-A `logfilesmetricexporter` pod runs concurrently with a `collector` pod on each node.
-
 * Verify that the `logfilesmetricexporter` pods are running in the namespace where you have created the `LogFileMetricExporter` CR, by running the following command and observing the output:
 +
 [source,terminal]
@@ -69,3 +70,5 @@ NAME                           READY   STATUS    RESTARTS   AGE
 logfilesmetricexporter-9qbjj   1/1     Running   0          2m46s
 logfilesmetricexporter-cbc4v   1/1     Running   0          2m46s
 ----
++
+A `logfilesmetricexporter` pod runs concurrently with a `collector` pod on each node.
