
Commit f20df72

Author arti committed: Add sentinelone as new logpush destination
1 parent 4a2fab8 commit f20df72

File tree: 2 files changed (+201, -48 lines)

src/content/docs/logs/logpush/logpush-job/api-configuration.mdx

Lines changed: 39 additions & 48 deletions
@@ -3,7 +3,6 @@ pcx_content_type: concept
title: API configuration
sidebar:
  order: 2
-
---

import { APIRequest } from "~/components";
@@ -16,10 +15,8 @@ You can locate `{zone_id}` and `{account_id}` arguments based on the [Find zone
The `{job_id}` argument is numeric, like 123456.
The `{dataset_id}` argument indicates the log category (such as `http_requests` or `audit_logs`).

-
-
| Operation | Description | API |
- | --------- | ------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+ | --------- | ------------------------------------------- | ---------------------------------------------------------------------------------------------- |
| `POST` | Create job | [Documentation](/api/resources/logpush/subresources/jobs/methods/create/) |
| `GET` | Retrieve job details | [Documentation](/api/resources/logpush/subresources/datasets/subresources/jobs/methods/get/) |
| `GET` | Retrieve all jobs for all datasets | [Documentation](/api/resources/logpush/subresources/jobs/methods/list/) |
@@ -32,17 +29,13 @@ The `{dataset_id}` argument indicates the log category (such as `http_requests`
| `POST` | Validate ownership challenge | [Documentation](/api/resources/logpush/subresources/ownership/methods/validate/) |
| `POST` | Validate log options | [Documentation](/api/resources/logpush/subresources/validate/methods/origin/) |

-
For concrete examples, refer to the tutorials in [Logpush examples](/logs/logpush/examples/).

## Connecting

The Logpush API requires credentials like any other Cloudflare API.

- <APIRequest
- path="/zones/{zone_id}/logpush/jobs"
- method="GET"
- />
+ <APIRequest path="/zones/{zone_id}/logpush/jobs" method="GET" />
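
Rendered outside the docs components, the request above corresponds roughly to the following call. This is a sketch only; `$ZONE_ID` and `$CF_API_TOKEN` are placeholder values you supply yourself, not anything defined in this commit:

```bash
# List existing Logpush jobs for a zone, authenticating with an API token.
# ZONE_ID and CF_API_TOKEN are placeholders for your own zone ID and token.
curl "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --header "Authorization: Bearer $CF_API_TOKEN"
```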

## Ownership

@@ -77,15 +70,13 @@ You will need to provide the token contained in the file when creating a job.

:::note[Note]

-
When using Sumo Logic, you may find it helpful to have [Live Tail](https://help.sumologic.com/05Search/Live-Tail/About-Live-Tail) open to see the challenge file as soon as it is uploaded.

-
:::

## Destination

- You can specify your cloud service provider destination via the required **destination\_conf** parameter.
+ You can specify your cloud service provider destination via the required **destination_conf** parameter.

:::note[Note]

@@ -101,44 +92,45 @@ The `destination_conf` parameter must follow this format:
Supported schemes are listed below, each tailored to specific providers such as
R2, S3, etc. Additionally, generic use cases like `https` are also covered:

- * `r2`,
- * `gs`,
- * `s3`,
- * `sumo`,
- * `https`,
- * `azure`,
- * `splunk`,
- * `datadog`.
+ - `r2`,
+ - `gs`,
+ - `s3`,
+ - `sumo`,
+ - `https`,
+ - `azure`,
+ - `splunk`,
+ - `sentinelone`,
+ - `datadog`.

The `destination-address` should generally be provided by the destination
provider. However, for certain providers, we require the `destination-address`
to follow a specific format:

- * **Cloudflare R2** (scheme `r2`): bucket path + account ID + R2 access key ID + R2 secret access key; for example: `r2://<BUCKET_PATH>?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>`
- * **AWS S3** (scheme `s3`): bucket + optional directory + region + optional encryption parameter (if required by your policy); for example: `s3://bucket/[dir]?region=<REGION>[&sse=AES256]`
- * **Datadog** (scheme `datadog`): Datadog endpoint URL + Datadog API key + optional parameters; for example: `datadog://<DATADOG_ENDPOINT_URL>?header_DD-API-KEY=<DATADOG_API_KEY>&ddsource=cloudflare&service=<SERVICE>&host=<HOST>&ddtags=<TAGS>`
- * **Google Cloud Storage** (scheme `gs`): bucket + optional directory; for example: `gs://bucket/[dir]`
- * **Microsoft Azure** (scheme `azure`): service-level SAS URL with `https` replaced by `azure` + optional directory added before query string; for example: `azure://<BLOB_CONTAINER_PATH>/[dir]?<QUERY_STRING>`
- * **New Relic** (use scheme `https`): New Relic endpoint URL which is `https://log-api.newrelic.com/log/v1` for US or `https://log-api.eu.newrelic.com/log/v1` for EU + a license key + a format; for example: for US `"https://log-api.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"` and for EU `"https://log-api.eu.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"`
- * **Splunk** (scheme `splunk`): Splunk endpoint URL + Splunk channel ID + insecure-skip-verify flag + Splunk sourcetype + Splunk authorization token; for example: `splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>`
- * **Sumo Logic** (scheme `sumo`): HTTP source address URL with `https` replaced by `sumo`; for example: `sumo://<SUMO_ENDPOINT_URL>/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>`
+ - **Cloudflare R2** (scheme `r2`): bucket path + account ID + R2 access key ID + R2 secret access key; for example: `r2://<BUCKET_PATH>?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>`
+ - **AWS S3** (scheme `s3`): bucket + optional directory + region + optional encryption parameter (if required by your policy); for example: `s3://bucket/[dir]?region=<REGION>[&sse=AES256]`
+ - **Datadog** (scheme `datadog`): Datadog endpoint URL + Datadog API key + optional parameters; for example: `datadog://<DATADOG_ENDPOINT_URL>?header_DD-API-KEY=<DATADOG_API_KEY>&ddsource=cloudflare&service=<SERVICE>&host=<HOST>&ddtags=<TAGS>`
+ - **Google Cloud Storage** (scheme `gs`): bucket + optional directory; for example: `gs://bucket/[dir]`
+ - **Microsoft Azure** (scheme `azure`): service-level SAS URL with `https` replaced by `azure` + optional directory added before query string; for example: `azure://<BLOB_CONTAINER_PATH>/[dir]?<QUERY_STRING>`
+ - **New Relic** (use scheme `https`): New Relic endpoint URL which is `https://log-api.newrelic.com/log/v1` for US or `https://log-api.eu.newrelic.com/log/v1` for EU + a license key + a format; for example: for US `"https://log-api.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"` and for EU `"https://log-api.eu.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"`
+ - **Splunk** (scheme `splunk`): Splunk endpoint URL + Splunk channel ID + insecure-skip-verify flag + Splunk sourcetype + Splunk authorization token; for example: `splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>`
+ - **Sumo Logic** (scheme `sumo`): HTTP source address URL with `https` replaced by `sumo`; for example: `sumo://<SUMO_ENDPOINT_URL>/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>`
+ - **SentinelOne** (scheme `sentinelone`): SentinelOne endpoint URL + SentinelOne sourcetype + SentinelOne authorization token; for example: `sentinelone://<SENTINELONE_ENDPOINT_URL>?sourcetype=<SOURCE_TYPE>&header_Authorization=<SENTINELONE_AUTH_TOKEN>`
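
As a rough illustration of the SentinelOne format added above, a filled-in `destination_conf` might look like the sketch below. It reuses the endpoint and source type examples from the new SentinelOne page in this commit; the token is an invented placeholder and must be URL-encoded before it is placed in the URL:

```bash
# Hypothetical assembled SentinelOne destination_conf (placeholder token).
# The "Bearer <token>" value must be URL-encoded, so the space becomes %20.
DESTINATION_CONF='sentinelone://ingest.us1.sentinelone.net/services/collector/raw?sourcetype=marketplace-cloudflare-latest&header_Authorization=Bearer%20<SENTINELONE_TOKEN>'
echo "$DESTINATION_CONF"
```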

For **R2**, **S3**, **Google Cloud Storage**, and **Azure**, you can organize logs into daily subdirectories by including the special placeholder `{DATE}` in the URL path. This placeholder will automatically be replaced with the date in the `YYYYMMDD` format (for example, `20180523`).

For example:

- * `s3://mybucket/logs/{DATE}?region=us-east-1&sse=AES256`
- * `azure://myblobcontainer/logs/{DATE}?[QueryString]`
+ - `s3://mybucket/logs/{DATE}?region=us-east-1&sse=AES256`
+ - `azure://myblobcontainer/logs/{DATE}?[QueryString]`

This approach is useful when you want your logs grouped by day.

-
For more information on the value for your cloud storage provider, consult the following conventions:

- * [AWS S3 CLI](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html) (S3Uri path argument type)
- * [Google Cloud Storage CLI](https://cloud.google.com/storage/docs/gsutil) (Syntax for accessing resources)
- * [Microsoft Azure Shared Access Signature](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview)
- * [Sumo Logic HTTP Source](https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source)
+ - [AWS S3 CLI](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html) (S3Uri path argument type)
+ - [Google Cloud Storage CLI](https://cloud.google.com/storage/docs/gsutil) (Syntax for accessing resources)
+ - [Microsoft Azure Shared Access Signature](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview)
+ - [Sumo Logic HTTP Source](https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source)

To check if a destination is already in use:

@@ -173,10 +165,8 @@ The kind parameter (optional) is used to differentiate between Logpush and Edge

:::note[Note]

-
The kind parameter cannot be used to update existing Logpush jobs. You can only specify the kind parameter when creating a new job.

-
:::

<APIRequest
@@ -196,19 +186,19 @@ The kind parameter cannot be used to update existing Logpush jobs. You can only
"EdgeResponseBytes",
"EdgeResponseStatus",
"EdgeStartTimestamp",
- "RayID"
+ "RayID",
],
- timestamp_format: "rfc3339"
+ timestamp_format: "rfc3339",
},
- kind: "edge"
+ kind: "edge",
}}
/>

## Options

- Logpull\_options has been replaced with Custom Log Formatting output\_options. Please refer to the [Log Output Options](/logs/logpush/logpush-job/log-output-options/) documentation for instructions on configuring these options and updating your existing jobs to use these options.
+ Logpull_options has been replaced with Custom Log Formatting output_options. Please refer to the [Log Output Options](/logs/logpush/logpush-job/log-output-options/) documentation for instructions on configuring these options and updating your existing jobs to use these options.

- If you are still using logpull\_options, here are the options that you can customize:
+ If you are still using logpull_options, here are the options that you can customize:

1. **Fields** (optional): Refer to [Datasets](/logs/logpush/logpush-job/datasets/) for the currently available fields. The list of fields is also accessible directly from the API: `https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields`. Default fields: `https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields/default`.
2. **Timestamp format** (optional): The format in which timestamp fields will be returned. Value options: `unixnano` (nanoseconds unit - default), `unix` (seconds unit), `rfc3339` (seconds unit).
@@ -219,14 +209,15 @@ If you are still using logpull\_options, here are the options that you can custo
The **CVE-2021-44228** parameter can only be set through the API at this time. Updating your Logpush job through the dashboard will set this option to false.
:::

- To check if the selected **logpull\_options** are valid:
+ To check if the selected **logpull_options** are valid:

<APIRequest
path="/zones/{zone_id}/logpush/validate/origin"
method="POST"
json={{
- logpull_options: "fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339&CVE-2021-44228=true",
- dataset: "http_requests"
+ logpull_options:
+ "fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339&CVE-2021-44228=true",
+ dataset: "http_requests",
}}
/>

@@ -256,9 +247,9 @@ Value can range from `0.0` (exclusive) to `1.0` (inclusive). `sample=0.1` means

These parameters can be used to gain control of batch size in the case that a destination has specific requirements. Files will be sent based on whichever parameter is hit first. If these options are not set, the system uses our internal defaults of 30s, 100k records, or the destinations globally defined limits.

- 1. **max\_upload\_bytes** (optional): The maximum uncompressed file size of a batch of logs. This setting value must be between 5 MB and 1 GB. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size.
- 2. **max\_upload\_records** (optional): The maximum number of log lines per batch. This setting must be between 1,000 and 1,000,000 lines. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this.
- 3. **max\_upload\_interval\_seconds** (optional): The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this.
+ 1. **max_upload_bytes** (optional): The maximum uncompressed file size of a batch of logs. This setting value must be between 5 MB and 1 GB. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size.
+ 2. **max_upload_records** (optional): The maximum number of log lines per batch. This setting must be between 1,000 and 1,000,000 lines. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this.
+ 3. **max_upload_interval_seconds** (optional): The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this.
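
As a sketch of how these limits can be applied, the request below updates an existing job with values inside the documented ranges. The zone ID, job ID, and token are placeholders, not values from this commit:

```bash
# Sketch: set batch-size limits on an existing Logpush job (placeholder IDs and token).
# Values are chosen from within the ranges documented above.
curl --request PUT \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --header "Authorization: Bearer $CF_API_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "max_upload_bytes": 5000000,
    "max_upload_records": 100000,
    "max_upload_interval_seconds": 60
  }'
```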

## Custom fields

Lines changed: 162 additions & 0 deletions
@@ -0,0 +1,162 @@
---
title: Enable SentinelOne
pcx_content_type: how-to
sidebar:
  order: 64
head:
  - tag: title
    content: Enable Logpush to SentinelOne
---

import { Render, APIRequest } from "~/components";

The HTTP Event Collector (HEC) is a reliable method to send log data to SentinelOne Singularity Data Lake. Cloudflare Logpush supports pushing logs directly to SentinelOne HEC via the Cloudflare dashboard or API.

## Manage via the Cloudflare dashboard

<Render file="enable-logpush-job" />

4. In **Select a destination**, choose **SentinelOne**.

5. Enter or select the following destination information:
   - **SentinelOne HEC URL**
   - **Auth Token** - Event Collector token.
   - **Source Type** - For example, `marketplace-cloudflare-latest`.

   When you are done entering the destination details, select **Continue**.

6. Select the dataset to push to the storage service.

7. In the next step, you need to configure your Logpush job:
   - Enter the **Job name**.
   - Under **If logs match**, you can select the events to include and/or remove from your logs. Refer to [Filters](/logs/logpush/logpush-job/filters/) for more information. Not all datasets have this option available.
   - In **Send the following fields**, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.

8. In **Advanced Options**, you can:
   - Choose the format of timestamp fields in your logs (`RFC3339` (default), `Unix`, or `UnixNano`).
   - Select a [sampling rate](/logs/logpush/logpush-job/api-configuration/#sampling-rate) for your logs or push a randomly-sampled percentage of logs.
   - Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.

9. Select **Submit** once you are done configuring your Logpush job.

## Manage via API

To set up a SentinelOne Logpush job:

1. Create a job with the appropriate endpoint URL and authentication parameters.
2. Enable the job to begin pushing logs.

:::note
Unlike configuring Logpush jobs for AWS S3, GCS, or Azure, there is no ownership challenge when configuring Logpush to SentinelOne.
:::

<Render file="enable-read-permissions" />

### 1. Create a job

To create a job, make a `POST` request to the Logpush jobs endpoint with the following fields:

- **name** (optional) - Use your domain name as the job name.
- **destination_conf** - A log destination consisting of an endpoint URL, source type, and authorization header in the string format below.
  - **SENTINELONE_ENDPOINT_URL**: The SentinelOne raw HTTP Event Collector URL with port. For example: `sentinelone://ingest.us1.sentinelone.net/services/collector/raw`. Cloudflare expects the SentinelOne endpoint to be `/services/collector/raw` while configuring and setting up the Logpush job.
  - **SENTINELONE_AUTH_TOKEN**: The SentinelOne authorization token, which must be URL-encoded. For example: `Bearer 0e6d94e8c-5792-4ad1-be3c-29bcaee0197d`.
  - **SOURCE_TYPE**: The SentinelOne source type. For example: `marketplace-cloudflare-latest`.

  ```bash
  "https://<SENTINELONE_ENDPOINT_URL>?sourcetype=<SOURCE_TYPE>&header_Authorization=<SENTINELONE_AUTH_TOKEN>"
  ```

- **dataset** - The category of logs you want to receive. Refer to [Datasets](/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.

- **output_options** (optional) - To configure fields, sample rate, and timestamp format, refer to [Log Output Options](/logs/logpush/logpush-job/log-output-options/). For timestamp, Cloudflare recommends using `timestamps=rfc3339`.
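
Because the `Bearer` token must be URL-encoded before it is embedded in `destination_conf`, one possible way to build the string is sketched below. It reuses the example endpoint, source type, and token from the list above (not real credentials) and assumes `jq` is installed:

```bash
# Sketch: URL-encode the SentinelOne token and assemble destination_conf.
# Endpoint, sourcetype, and token are the example values from this page.
TOKEN="Bearer 0e6d94e8c-5792-4ad1-be3c-29bcaee0197d"
ENCODED_TOKEN=$(jq -rn --arg t "$TOKEN" '$t|@uri')   # e.g. Bearer%200e6d94e8c-...
echo "sentinelone://ingest.us1.sentinelone.net/services/collector/raw?sourcetype=marketplace-cloudflare-latest&header_Authorization=${ENCODED_TOKEN}"
```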

Example request using cURL:

<APIRequest
  path="/zones/{zone_id}/logpush/jobs"
  method="POST"
  json={{
    name: "<DOMAIN_NAME>",
    destination_conf:
      "sentinelone://<SENTINELONE_ENDPOINT_URL>?sourcetype=<SOURCE_TYPE>&header_Authorization=<SENTINELONE_AUTH_TOKEN>",
    output_options: {
      field_names: [
        "ClientIP",
        "ClientRequestHost",
        "ClientRequestMethod",
        "ClientRequestURI",
        "EdgeEndTimestamp",
        "EdgeResponseBytes",
        "EdgeResponseStatus",
        "EdgeStartTimestamp",
        "RayID",
      ],
      timestamp_format: "rfc3339",
    },
    dataset: "http_requests",
  }}
/>
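
Outside the docs renderer, the `<APIRequest>` component above corresponds roughly to the following raw request; `$ZONE_ID` and `$CF_API_TOKEN` are placeholders for your own zone ID and API token:

```bash
# Rough curl equivalent of the job-creation request above (placeholder IDs and token).
curl --request POST \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs" \
  --header "Authorization: Bearer $CF_API_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{
    "name": "<DOMAIN_NAME>",
    "destination_conf": "sentinelone://<SENTINELONE_ENDPOINT_URL>?sourcetype=<SOURCE_TYPE>&header_Authorization=<SENTINELONE_AUTH_TOKEN>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "dataset": "http_requests"
  }'
```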

Response:

```json
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "enabled": false,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "sentinelone://<SENTINELONE_ENDPOINT_URL>?sourcetype=<SOURCE_TYPE>&header_Authorization=<SENTINELONE_AUTH_TOKEN>",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```

### 2. Enable (update) a job

To enable a job, make a `PUT` request to the Logpush jobs endpoint. Use the job ID returned from the previous step in the URL and send `{"enabled": true}` in the request body.

Example request using cURL:

<APIRequest
  method="PUT"
  path="/zones/{zone_id}/logpush/jobs/{job_id}"
  json={{
    enabled: true,
  }}
/>
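
The equivalent raw request, again with placeholder identifiers:

```bash
# Rough curl equivalent: enable the job created in step 1 (placeholder IDs and token).
curl --request PUT \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --header "Authorization: Bearer $CF_API_TOKEN" \
  --header "Content-Type: application/json" \
  --data '{"enabled": true}'
```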

Response:

```json
{
  "errors": [],
  "messages": [],
  "result": {
    "id": <JOB_ID>,
    "dataset": "http_requests",
    "enabled": true,
    "name": "<DOMAIN_NAME>",
    "output_options": {
      "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp", "EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
      "timestamp_format": "rfc3339"
    },
    "destination_conf": "sentinelone://<SENTINELONE_ENDPOINT_URL>?sourcetype=<SOURCE_TYPE>&header_Authorization=<SENTINELONE_AUTH_TOKEN>",
    "last_complete": null,
    "last_error": null,
    "error_message": null
  },
  "success": true
}
```
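
To confirm that the enabled job is actually pushing logs, one option is to poll the job and inspect the `last_complete` and `last_error` fields shown in the response above. A minimal sketch, assuming placeholder identifiers and `jq`:

```bash
# Sketch: check job health; last_complete should populate once pushes succeed.
curl --silent \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/logpush/jobs/$JOB_ID" \
  --header "Authorization: Bearer $CF_API_TOKEN" \
  | jq '.result | {enabled, last_complete, last_error}'
```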

Refer to the [Logpush FAQ](/logs/faq/logpush/) for troubleshooting information.

0 commit comments
