
Commit 3d6aa00

maycmlee, cswatt, and rtrieu
authored
[DOCS-11765] Add OP Splunk HEC Distribution of OTel and DDOT (#32704)
* add info
* small edits
* Apply suggestions from code review
* apply suggestions
* Update content/en/observability_pipelines/sources/opentelemetry.md
* apply suggestions
* apply suggestions
* Update content/en/observability_pipelines/sources/opentelemetry.md
* Apply suggestions from code review
  Co-authored-by: cecilia saixue wat-kim <cecilia.watt@datadoghq.com>
* Apply suggestions from code review
  Co-authored-by: Rosa Trieu <107086888+rtrieu@users.noreply.github.com>
* Update content/en/observability_pipelines/sources/opentelemetry.md

---------

Co-authored-by: cecilia saixue wat-kim <cecilia.watt@datadoghq.com>
Co-authored-by: Rosa Trieu <107086888+rtrieu@users.noreply.github.com>
1 parent aef0c6f commit 3d6aa00

File tree: 3 files changed (+80, -5 lines)

content/en/observability_pipelines/sources/datadog_agent.md

Lines changed: 4 additions & 1 deletion
@@ -5,6 +5,8 @@ disable_toc: false

Use Observability Pipelines' Datadog Agent source to receive logs from the Datadog Agent. Select and set up this source when you [set up a pipeline][1].

+**Note**: If you are using the Datadog Distribution of OpenTelemetry (DDOT) Collector, you must [use the OpenTelemetry source to send logs to Observability Pipelines][4].
+
## Prerequisites

{{% observability_pipelines/prerequisites/datadog_agent %}}
@@ -38,4 +40,5 @@ Use the Agent configuration file or the Agent Helm chart values file to connect

[1]: /observability_pipelines/configuration/set_up_pipelines/
[2]: /containers/docker/log/?tab=containerinstallation#linux
-[3]: /containers/guide/container-discovery-management/?tab=helm#setting-environment-variables
+[3]: /containers/guide/container-discovery-management/?tab=helm#setting-environment-variables
+[4]: /observability_pipelines/sources/opentelemetry/#send-logs-from-the-datadog-distribution-of-opentelemetry-collector-to-observability-pipelines

content/en/observability_pipelines/sources/opentelemetry.md

Lines changed: 47 additions & 4 deletions
@@ -1,12 +1,16 @@
---
-title: OpenTelemetry
+title: OpenTelemetry Source
disable_toc: false
---

## Overview

Use Observability Pipelines' OpenTelemetry (OTel) source to collect logs from your OTel Collector through HTTP or gRPC. Select and set up this source when you set up a pipeline. The information below is configured in the pipelines UI.

+**Notes**:
+- If you are using the Datadog Distribution of OpenTelemetry (DDOT) Collector, use the OpenTelemetry source to [send logs to Observability Pipelines](#send-logs-from-the-datadog-distribution-of-opentelemetry-collector-to-observability-pipelines).
+- If you are using the Splunk Distribution of the OpenTelemetry Collector, use the [Splunk HEC source][4] to send logs to Observability Pipelines.
+
### When to use this source

Common scenarios when you might use this source:
@@ -62,9 +66,48 @@ The Worker exposes the gRPC endpoint on port 4318. This is an example of configu

Based on these example configurations, these are values you enter for the following environment variables:

-- HTTP listener address: `worker:4317`
-- gRPC listener address: `worker:4318`
+- HTTP listener address: `worker:4318`
+- gRPC listener address: `worker:4317`
+
+## Send logs from the Datadog Distribution of OpenTelemetry Collector to Observability Pipelines
+
+To send logs from the Datadog Distribution of the OpenTelemetry (DDOT) Collector:
+1. Deploy the DDOT Collector using Helm. See [Install the DDOT Collector as a Kubernetes DaemonSet][5] for instructions.
+1. [Set up a pipeline][6] in Observability Pipelines using the [OpenTelemetry source](#set-up-the-source-in-the-pipeline-ui).
+1. (Optional) Datadog recommends adding an [Edit Fields processor][7] to the pipeline to append the field `op_otel_ddot:true`.
+1. When you install the Worker, configure the OpenTelemetry source environment variables:
+   1. Set your HTTP listener to `0.0.0.0:4318`.
+   1. Set your gRPC listener to `0.0.0.0:4317`.
+1. After you install the Worker and deploy the pipeline, update the OpenTelemetry Collector's [`otel-config.yaml`][9] to include an exporter that sends logs to Observability Pipelines. For example:
+   ```yaml
+   exporters:
+     otlphttp:
+       endpoint: http://opw-observability-pipelines-worker.default.svc.cluster.local:4318
+   ...
+   service:
+     pipelines:
+       logs:
+         exporters: [otlphttp]
+   ```
+1. Redeploy the Datadog Agent with the updated [`otel-config.yaml`][9]. For example, if the Agent is installed in Kubernetes:
+   ```bash
+   helm upgrade --install datadog-agent datadog/datadog \
+     --values ./agent.yaml \
+     --set-file datadog.otelCollector.config=./otel-config.yaml
+   ```
+
+**Notes**:
+- Because DDOT sends logs to Observability Pipelines, not to the Datadog Agent, the following settings have no effect when sending logs from DDOT to Observability Pipelines:
+  - `DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_ENABLED`
+  - `DD_OBSERVABILITY_PIPELINES_WORKER_LOGS_URL`
+- Logs sent from DDOT might have nested objects that prevent Datadog from parsing the logs correctly. To resolve this, Datadog recommends using the [Custom Processor][8] to flatten the nested `resource` object.

[1]: https://opentelemetry.io/docs/collector/
[2]: /observability_pipelines/sources/
-[3]: /observability_pipelines/configuration/install_the_worker/advanced_worker_configurations/#bootstrap-options
+[3]: /observability_pipelines/configuration/install_the_worker/advanced_worker_configurations/#bootstrap-options
+[4]: /observability_pipelines/sources/splunk_hec/#send-logs-from-the-splunk-distribution-of-the-opentelemetry-collector-to-observability-pipelines
+[5]: /opentelemetry/setup/ddot_collector/install/kubernetes_daemonset/?tab=datadogoperator
+[6]: /observability_pipelines/configuration/set_up_pipelines/
+[7]: /observability_pipelines/processors/edit_fields#add-field
+[8]: /observability_pipelines/processors/custom_processor
+[9]: https://docs.datadoghq.com/opentelemetry/setup/ddot_collector/install/kubernetes_daemonset/?tab=helm#configure-the-opentelemetry-collector
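For reference, here is a hypothetical `otel-config.yaml` sketch that ties the pieces above together: the `otlphttp` exporter targets the Worker's HTTP listener on port 4318, the `otlp` exporter targets the gRPC listener on port 4317, and only the logs pipeline is routed to Observability Pipelines. The `otlp` receiver, the `datadog` exporter, the metrics pipeline, and the Worker hostname shown here are assumptions standing in for whatever your deployed configuration already defines.

```yaml
# Hypothetical sketch only. The otlp receiver, datadog exporter, metrics
# pipeline, and Worker hostname are assumptions; keep whatever your existing
# collector configuration already defines.
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}
  otlp:
    # gRPC exporter -> the Worker's gRPC listener (port 4317)
    endpoint: opw-observability-pipelines-worker.default.svc.cluster.local:4317
    tls:
      insecure: true
  otlphttp:
    # HTTP exporter -> the Worker's HTTP listener (port 4318)
    endpoint: http://opw-observability-pipelines-worker.default.svc.cluster.local:4318

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlphttp]   # or [otlp] to ship logs over gRPC instead
    metrics:
      receivers: [otlp]
      exporters: [datadog]
```

In this sketch only the logs pipeline points at the Worker; other signals keep their existing exporters.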

content/en/observability_pipelines/sources/splunk_hec.md

Lines changed: 29 additions & 0 deletions
@@ -5,6 +5,8 @@ disable_toc: false

Use Observability Pipelines' Splunk HTTP Event Collector (HEC) source to receive logs from your Splunk HEC. Select and set up this source when you [set up a pipeline][1].

+**Note**: Use the Splunk HEC source if you want to [send logs from the Splunk Distribution of the OpenTelemetry Collector to Observability Pipelines](#send-logs-from-the-splunk-distribution-of-the-opentelemetry-collector-to-observability-pipelines).
+
## Prerequisites

{{% observability_pipelines/prerequisites/splunk_hec %}}
@@ -21,4 +23,31 @@ Select and set up this source when you [set up a pipeline][1]. The information b

{{% observability_pipelines/log_source_configuration/splunk_hec %}}

+## Send logs from the Splunk Distribution of the OpenTelemetry Collector to Observability Pipelines
+
+To send logs from the Splunk Distribution of the OpenTelemetry Collector:
+
+1. Install the Splunk OpenTelemetry Collector based on your environment:
+   - [Kubernetes][2]
+   - [Linux][3]
+1. [Set up a pipeline][4] using the [Splunk HEC source](#set-up-the-source-in-the-pipeline-ui).
+1. Configure the Splunk OpenTelemetry Collector. Copy the example configuration file:
+   ```bash
+   cp /etc/otel/collector/splunk-otel-collector.conf.example /etc/otel/collector/splunk-otel-collector.conf
+   ```
+   Then, in `splunk-otel-collector.conf`, set `SPLUNK_HEC_URL` to point to the Observability Pipelines Worker:
+   ```bash
+   # Splunk HEC endpoint URL, if forwarding to Splunk Observability Cloud
+   # SPLUNK_HEC_URL=https://ingest.us0.signalfx.com/v1/log
+   # To forward to the Observability Pipelines Worker, with HEC listening on port 8088:
+   SPLUNK_HEC_URL=http://<OPW_HOST>:8088/services/collector
+   ```
+   - `<OPW_HOST>` is the IP or URL of the host (or load balancer) associated with the Observability Pipelines Worker.
+   - For CloudFormation installs, the `LoadBalancerDNS` CloudFormation output has the correct URL to use.
+   - For Kubernetes installs, the internal DNS record of the Observability Pipelines Worker service can be used, for example `opw-observability-pipelines-worker.default.svc.cluster.local`.
+
+**Note**: If you are using a firewall, make sure your firewall allows traffic from the Splunk OpenTelemetry Collector to the Worker.
+
[1]: /observability_pipelines/configuration/set_up_pipelines/
+[2]: https://help.splunk.com/en/splunk-observability-cloud/manage-data/splunk-distribution-of-the-opentelemetry-collector/get-started-with-the-splunk-distribution-of-the-opentelemetry-collector/collector-for-kubernetes
+[3]: https://help.splunk.com/en/splunk-observability-cloud/manage-data/splunk-distribution-of-the-opentelemetry-collector/get-started-with-the-splunk-distribution-of-the-opentelemetry-collector/collector-for-linux
+[4]: /observability_pipelines/configuration/set_up_pipelines/
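The `SPLUNK_HEC_URL` value above is typically consumed by the collector's `splunk_hec` exporter. Below is a hedged sketch of that exporter pointed at a Worker running in Kubernetes; the `filelog` receiver path, the `SPLUNK_HEC_TOKEN` variable, and the Worker hostname (taken from the internal DNS example above) are assumptions, not values from this page.

```yaml
# Hedged sketch: the receiver path, token variable, and Worker hostname are assumptions.
receivers:
  filelog:
    include: [/var/log/myapp/*.log]   # placeholder log path

exporters:
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"
    # HEC endpoint now points at the Observability Pipelines Worker (port 8088)
    # instead of Splunk; the hostname shown is the Kubernetes internal DNS example.
    endpoint: "http://opw-observability-pipelines-worker.default.svc.cluster.local:8088/services/collector"

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [splunk_hec]
```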
