You will need to provide the token contained in the file when creating a job.

:::note[Note]
When using Sumo Logic, you may find it helpful to have [Live Tail](https://help.sumologic.com/05Search/Live-Tail/About-Live-Tail) open to see the challenge file as soon as it is uploaded.
:::
## Destination
You can specify your cloud service provider destination via the required **destination_conf** parameter.

The `destination_conf` parameter must follow this format:
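
A general sketch, inferred from the provider-specific examples further down this page rather than from an exact specification (each provider appends its own query parameters):

```txt
<scheme>://<destination-address>
```
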
Supported schemes are listed below, each tailored to specific providers such as
R2, S3, etc. Additionally, generic use cases like `https` are also covered:

- `r2`,
- `gs`,
- `s3`,
- `sumo`,
- `https`,
- `azure`,
- `splunk`,
- `sentinelone`,
- `datadog`.

The `destination-address` should generally be provided by the destination
provider. However, for certain providers, we require the `destination-address`
to follow a specific format:

- **Cloudflare R2** (scheme `r2`): bucket path + account ID + R2 access key ID + R2 secret access key; for example: `r2://<BUCKET_PATH>?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>`
- **AWS S3** (scheme `s3`): bucket + optional directory + region + optional encryption parameter (if required by your policy); for example: `s3://bucket/[dir]?region=<REGION>[&sse=AES256]`
- **Datadog** (scheme `datadog`): Datadog endpoint URL + Datadog API key + optional parameters; for example: `datadog://<DATADOG_ENDPOINT_URL>?header_DD-API-KEY=<DATADOG_API_KEY>&ddsource=cloudflare&service=<SERVICE>&host=<HOST>&ddtags=<TAGS>`
- **Microsoft Azure** (scheme `azure`): service-level SAS URL with `https` replaced by `azure` + optional directory added before query string; for example: `azure://<BLOB_CONTAINER_PATH>/[dir]?<QUERY_STRING>`
- **New Relic** (use scheme `https`): New Relic endpoint URL, which is `https://log-api.newrelic.com/log/v1` for US or `https://log-api.eu.newrelic.com/log/v1` for EU, + a license key + a format; for example, for US: `"https://log-api.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"` and for EU: `"https://log-api.eu.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"`
- **Splunk** (scheme `splunk`): Splunk endpoint URL + Splunk channel ID + insecure-skip-verify flag + Splunk sourcetype + Splunk authorization token; for example: `splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>`
- **Sumo Logic** (scheme `sumo`): HTTP source address URL with `https` replaced by `sumo`; for example: `sumo://<SUMO_ENDPOINT_URL>/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>`

For **R2**, **S3**, **Google Cloud Storage**, and **Azure**, you can organize logs into daily subdirectories by including the special placeholder `{DATE}` in the URL path. This placeholder will automatically be replaced with the date in the `YYYYMMDD` format (for example, `20180523`).
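As an illustration, a hypothetical R2 destination that groups HTTP request logs by day could look like `r2://<BUCKET_PATH>/http_requests/{DATE}?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>`; the `http_requests` path segment is illustrative only.
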
The kind parameter (optional) is used to differentiate between Logpush and Edge Log Delivery jobs.

:::note[Note]
The kind parameter cannot be used to update existing Logpush jobs. You can only specify the kind parameter when creating a new job.
:::
<APIRequest
        "EdgeResponseBytes",
        "EdgeResponseStatus",
        "EdgeStartTimestamp",
        "RayID",
      ],
      timestamp_format: "rfc3339",
    },
    kind: "edge",
  }}
/>

## Options
The `logpull_options` parameter has been replaced with Custom Log Formatting `output_options`. Refer to the [Log Output Options](/logs/logpush/logpush-job/log-output-options/) documentation for instructions on configuring these options and on updating your existing jobs to use them.

If you are still using `logpull_options`, here are the options that you can customize:

1. **Fields** (optional): Refer to [Datasets](/logs/logpush/logpush-job/datasets/) for the currently available fields. The list of fields is also accessible directly from the API: `https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields`. Default fields: `https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields/default`.
2. **Timestamp format** (optional): The format in which timestamp fields will be returned. Value options: `unixnano` (nanoseconds unit - default), `unix` (seconds unit), `rfc3339` (seconds unit).

:::note[Note]
The **CVE-2021-44228** parameter can only be set through the API at this time. Updating your Logpush job through the dashboard will set this option to false.
:::
To check if the selected **logpull_options** are valid:
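
As a rough sketch only: the validate endpoint path and the API token header below are assumptions rather than details confirmed on this page, so adjust them to your setup.

```bash
# Sketch: <ZONE_ID> and <API_TOKEN> are placeholders; the validate-origin
# path is an assumption, not quoted from this page.
curl "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/validate/origin" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --data '{"logpull_options": "fields=RayID,ClientIP,EdgeStartTimestamp&timestamps=rfc3339"}'
```
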
## Sampling rate

The **sample** parameter value can range from `0.0` (exclusive) to `1.0` (inclusive). `sample=0.1` means send 10% of all logs.

The following parameters can be used to gain control of batch size when a destination has specific requirements. Files will be sent based on whichever parameter is hit first. If these options are not set, the system uses our internal defaults of 30 seconds, 100,000 records, or the destination's globally defined limits.

1. **max_upload_bytes** (optional): The maximum uncompressed file size of a batch of logs. This setting value must be between 5 MB and 1 GB. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size.
2. **max_upload_records** (optional): The maximum number of log lines per batch. This setting must be between 1,000 and 1,000,000 lines. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this.
3. **max_upload_interval_seconds** (optional): The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this.
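
For instance, a job create or update body that caps batches at roughly 5 MB, 100,000 records, or 60 seconds (whichever is reached first) could include fields like the following. This is a sketch with illustrative values; the rest of the request body is omitted.

```json
{
  "max_upload_bytes": 5000000,
  "max_upload_records": 100000,
  "max_upload_interval_seconds": 60
}
```
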
The HTTP Event Collector (HEC) is a reliable method to send log data to SentinelOne Singularity Data Lake. Cloudflare Logpush supports pushing logs directly to SentinelOne HEC via the Cloudflare dashboard or API.
## Manage via the Cloudflare dashboard
<Render file="enable-logpush-job" />

4. In **Select a destination**, choose **SentinelOne**.
5. Enter or select the following destination information:

   - **SentinelOne HEC URL**
   - **Auth Token** - Event Collector token.
   - **Source Type** - For example, `marketplace-cloudflare-latest`.

   When you are done entering the destination details, select **Continue**.

6. Select the dataset to push to the storage service.
7. In the next step, you need to configure your logpush job:

   - Enter the **Job name**.
   - Under **If logs match**, you can select the events to include and/or remove from your logs. Refer to [Filters](/logs/logpush/logpush-job/filters/) for more information. Not all datasets have this option available.
   - In **Send the following fields**, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.

8. In **Advanced Options**, you can:

   - Choose the format of timestamp fields in your logs (`RFC3339` (default), `Unix`, or `UnixNano`).
   - Select a [sampling rate](/logs/logpush/logpush-job/api-configuration/#sampling-rate) for your logs or push a randomly-sampled percentage of logs.
   - Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.

9. Select **Submit** once you are done configuring your logpush job.
## Manage via API
To set up a SentinelOne Logpush job:
1. Create a job with the appropriate endpoint URL and authentication parameters.
2. Enable the job to begin pushing logs.
:::note
Unlike configuring Logpush jobs for AWS S3, GCS, or Azure, there is no ownership challenge when configuring Logpush to SentinelOne.
:::
<Render file="enable-read-permissions" />

### 1. Create a job
To create a job, make a `POST` request to the Logpush jobs endpoint with the following fields:

- **name** (optional) - Use your domain name as the job name.
- **destination_conf** - A log destination consisting of an endpoint URL, source type, and authorization header in the string format below.
  - **\<SENTINELONE_ENDPOINT_URL>**: The SentinelOne raw HTTP Event Collector URL with port. For example: `sentinelone://ingest.us1.sentinelone.net/services/collector/raw`. Cloudflare expects the SentinelOne endpoint to be `/services/collector/raw` when configuring and setting up the Logpush job.
  - `<SENTINELONE_AUTH_TOKEN>`: The URL-encoded SentinelOne authorization token. For example: `Bearer 0e6d94e8c-5792-4ad1-be3c-29bcaee0197d`.
  - `<SOURCE_TYPE>`: The SentinelOne source type. For example: `marketplace-cloudflare-latest`.
- **dataset** - The category of logs you want to receive. Refer to [Datasets](/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.
- **output_options** (optional) - To configure fields, sample rate, and timestamp format, refer to [Log Output Options](/logs/logpush/logpush-job/log-output-options/). For the timestamp, Cloudflare recommends using `timestamps=rfc3339`.
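
As a sketch of how these fields can fit together in one request: the zone ID, API token, and especially the way the source type and auth token are encoded as `destination_conf` query parameters below are assumptions (modeled on the Splunk-style HEC format shown earlier for the `splunk` scheme), so verify them against your SentinelOne configuration.

```bash
# Sketch only: <ZONE_ID>, <API_TOKEN>, and <SENTINELONE_AUTH_TOKEN> are placeholders,
# and the sourcetype/header_Authorization query parameters are assumptions.
curl "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --data '{
    "name": "example.com",
    "dataset": "http_requests",
    "destination_conf": "sentinelone://ingest.us1.sentinelone.net/services/collector/raw?sourcetype=marketplace-cloudflare-latest&header_Authorization=Bearer%20<SENTINELONE_AUTH_TOKEN>",
    "output_options": { "timestamp_format": "rfc3339" }
  }'
```
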
### 2. Enable the job

To enable a job, make a `PUT` request to the Logpush jobs endpoint. Use the job ID returned from the previous step in the URL and send `{"enabled": true}` in the request body.
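
For example, assuming the zone-scoped Logpush jobs endpoint and an API token header (all placeholders below are yours to fill in), the request might look like:

```bash
# <ZONE_ID>, <JOB_ID>, and <API_TOKEN> are placeholders.
curl --request PUT \
  "https://api.cloudflare.com/client/v4/zones/<ZONE_ID>/logpush/jobs/<JOB_ID>" \
  --header "Authorization: Bearer <API_TOKEN>" \
  --header "Content-Type: application/json" \
  --data '{"enabled": true}'
```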