diff --git a/src/content/docs/logs/logpush/logpush-job/api-configuration.mdx b/src/content/docs/logs/logpush/logpush-job/api-configuration.mdx
index 13d16da43f7184a..04c0bceda12805c 100644
--- a/src/content/docs/logs/logpush/logpush-job/api-configuration.mdx
+++ b/src/content/docs/logs/logpush/logpush-job/api-configuration.mdx
@@ -3,7 +3,6 @@ pcx_content_type: concept
title: API configuration
sidebar:
order: 2
-
---
import { APIRequest } from "~/components";
@@ -16,10 +15,8 @@ You can locate `{zone_id}` and `{account_id}` arguments based on the [Find zone
The `{job_id}` argument is numeric, like 123456.
The `{dataset_id}` argument indicates the log category (such as `http_requests` or `audit_logs`).
-
-
| Operation | Description | API |
-| --------- | ------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| --------- | ------------------------------------------- | ---------------------------------------------------------------------------------------------- |
| `POST` | Create job | [Documentation](/api/resources/logpush/subresources/jobs/methods/create/) |
| `GET` | Retrieve job details | [Documentation](/api/resources/logpush/subresources/datasets/subresources/jobs/methods/get/) |
| `GET` | Retrieve all jobs for all datasets | [Documentation](/api/resources/logpush/subresources/jobs/methods/list/) |
@@ -32,17 +29,13 @@ The `{dataset_id}` argument indicates the log category (such as `http_requests`
| `POST` | Validate ownership challenge | [Documentation](/api/resources/logpush/subresources/ownership/methods/validate/) |
| `POST` | Validate log options | [Documentation](/api/resources/logpush/subresources/validate/methods/origin/) |
-
For concrete examples, refer to the tutorials in [Logpush examples](/logs/logpush/examples/).
## Connecting
The Logpush API requires credentials like any other Cloudflare API.
-
+
## Ownership
@@ -77,15 +70,13 @@ You will need to provide the token contained in the file when creating a job.
:::note[Note]
-
When using Sumo Logic, you may find it helpful to have [Live Tail](https://help.sumologic.com/05Search/Live-Tail/About-Live-Tail) open to see the challenge file as soon as it is uploaded.
-
:::
## Destination
-You can specify your cloud service provider destination via the required **destination\_conf** parameter.
+You can specify your cloud service provider destination via the required **destination_conf** parameter.
:::note[Note]
@@ -101,44 +92,45 @@ The `destination_conf` parameter must follow this format:
Supported schemes are listed below, each tailored to specific providers such as
R2, S3, etc. Additionally, generic use cases like `https` are also covered:
-* `r2`,
-* `gs`,
-* `s3`,
-* `sumo`,
-* `https`,
-* `azure`,
-* `splunk`,
-* `datadog`.
+- `r2`,
+- `gs`,
+- `s3`,
+- `sumo`,
+- `https`,
+- `azure`,
+- `splunk`,
+- `sentinelone`,
+- `datadog`.
The `destination-address` should generally be provided by the destination
provider. However, for certain providers, we require the `destination-address`
to follow a specific format:
-* **Cloudflare R2** (scheme `r2`): bucket path + account ID + R2 access key ID + R2 secret access key; for example: `r2://<BUCKET_PATH>?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>`
-* **AWS S3** (scheme `s3`): bucket + optional directory + region + optional encryption parameter (if required by your policy); for example: `s3://bucket/[dir]?region=<REGION>[&sse=AES256]`
-* **Datadog** (scheme `datadog`): Datadog endpoint URL + Datadog API key + optional parameters; for example: `datadog://<DATADOG_ENDPOINT_URL>?header_DD-API-KEY=<DATADOG_API_KEY>&ddsource=cloudflare&service=<SERVICE>&host=<HOST>&ddtags=<TAGS>`
-* **Google Cloud Storage** (scheme `gs`): bucket + optional directory; for example: `gs://bucket/[dir]`
-* **Microsoft Azure** (scheme `azure`): service-level SAS URL with `https` replaced by `azure` + optional directory added before query string; for example: `azure://<BLOB_CONTAINER_PATH>/[dir]?<QUERY_STRING>`
-* **New Relic** (use scheme `https`): New Relic endpoint URL which is `https://log-api.newrelic.com/log/v1` for US or `https://log-api.eu.newrelic.com/log/v1` for EU + a license key + a format; for example: for US `"https://log-api.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"` and for EU `"https://log-api.eu.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"`
-* **Splunk** (scheme `splunk`): Splunk endpoint URL + Splunk channel ID + insecure-skip-verify flag + Splunk sourcetype + Splunk authorization token; for example: `splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>`
-* **Sumo Logic** (scheme `sumo`): HTTP source address URL with `https` replaced by `sumo`; for example: `sumo://<SUMO_ENDPOINT_URL>/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>`
+- **Cloudflare R2** (scheme `r2`): bucket path + account ID + R2 access key ID + R2 secret access key; for example: `r2://<BUCKET_PATH>?account-id=<ACCOUNT_ID>&access-key-id=<R2_ACCESS_KEY_ID>&secret-access-key=<R2_SECRET_ACCESS_KEY>`
+- **AWS S3** (scheme `s3`): bucket + optional directory + region + optional encryption parameter (if required by your policy); for example: `s3://bucket/[dir]?region=<REGION>[&sse=AES256]`
+- **Datadog** (scheme `datadog`): Datadog endpoint URL + Datadog API key + optional parameters; for example: `datadog://<DATADOG_ENDPOINT_URL>?header_DD-API-KEY=<DATADOG_API_KEY>&ddsource=cloudflare&service=<SERVICE>&host=<HOST>&ddtags=<TAGS>`
+- **Google Cloud Storage** (scheme `gs`): bucket + optional directory; for example: `gs://bucket/[dir]`
+- **Microsoft Azure** (scheme `azure`): service-level SAS URL with `https` replaced by `azure` + optional directory added before query string; for example: `azure://<BLOB_CONTAINER_PATH>/[dir]?<QUERY_STRING>`
+- **New Relic** (use scheme `https`): New Relic endpoint URL which is `https://log-api.newrelic.com/log/v1` for US or `https://log-api.eu.newrelic.com/log/v1` for EU + a license key + a format; for example: for US `"https://log-api.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"` and for EU `"https://log-api.eu.newrelic.com/log/v1?Api-Key=<NR_LICENSE_KEY>&format=cloudflare"`
+- **Splunk** (scheme `splunk`): Splunk endpoint URL + Splunk channel ID + insecure-skip-verify flag + Splunk sourcetype + Splunk authorization token; for example: `splunk://<SPLUNK_ENDPOINT_URL>?channel=<SPLUNK_CHANNEL_ID>&insecure-skip-verify=<INSECURE_SKIP_VERIFY>&sourcetype=<SOURCE_TYPE>&header_Authorization=<SPLUNK_AUTH_TOKEN>`
+- **Sumo Logic** (scheme `sumo`): HTTP source address URL with `https` replaced by `sumo`; for example: `sumo://<SUMO_ENDPOINT_URL>/receiver/v1/http/<UNIQUE_HTTP_COLLECTOR_CODE>`
+- **SentinelOne** (scheme `sentinelone`): SentinelOne endpoint URL + SentinelOne sourcetype + SentinelOne authorization token; for example: `sentinelone://<SENTINELONE_ENDPOINT_URL>?sourcetype=<SOURCE_TYPE>&header_Authorization=<SENTINELONE_AUTH_TOKEN>`
For **R2**, **S3**, **Google Cloud Storage**, and **Azure**, you can organize logs into daily subdirectories by including the special placeholder `{DATE}` in the URL path. This placeholder will automatically be replaced with the date in the `YYYYMMDD` format (for example, `20180523`).
For example:
-* `s3://mybucket/logs/{DATE}?region=us-east-1&sse=AES256`
-* `azure://myblobcontainer/logs/{DATE}?[QueryString]`
+- `s3://mybucket/logs/{DATE}?region=us-east-1&sse=AES256`
+- `azure://myblobcontainer/logs/{DATE}?[QueryString]`
This approach is useful when you want your logs grouped by day.
-
For more information on the value for your cloud storage provider, consult the following conventions:
-* [AWS S3 CLI](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html) (S3Uri path argument type)
-* [Google Cloud Storage CLI](https://cloud.google.com/storage/docs/gsutil) (Syntax for accessing resources)
-* [Microsoft Azure Shared Access Signature](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview)
-* [Sumo Logic HTTP Source](https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source)
+- [AWS S3 CLI](https://docs.aws.amazon.com/cli/latest/reference/s3/index.html) (S3Uri path argument type)
+- [Google Cloud Storage CLI](https://cloud.google.com/storage/docs/gsutil) (Syntax for accessing resources)
+- [Microsoft Azure Shared Access Signature](https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview)
+- [Sumo Logic HTTP Source](https://help.sumologic.com/03Send-Data/Sources/02Sources-for-Hosted-Collectors/HTTP-Source)
To check if a destination is already in use:
@@ -173,10 +165,8 @@ The kind parameter (optional) is used to differentiate between Logpush and Edge
:::note[Note]
-
The kind parameter cannot be used to update existing Logpush jobs. You can only specify the kind parameter when creating a new job.
-
:::
## Options
-Logpull\_options has been replaced with Custom Log Formatting output\_options. Please refer to the [Log Output Options](/logs/logpush/logpush-job/log-output-options/) documentation for instructions on configuring these options and updating your existing jobs to use these options.
+Logpull_options has been replaced with Custom Log Formatting output_options. Please refer to the [Log Output Options](/logs/logpush/logpush-job/log-output-options/) documentation for instructions on configuring these options and updating your existing jobs to use these options.
-If you are still using logpull\_options, here are the options that you can customize:
+If you are still using logpull_options, here are the options that you can customize:
1. **Fields** (optional): Refer to [Datasets](/logs/logpush/logpush-job/datasets/) for the currently available fields. The list of fields is also accessible directly from the API: `https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields`. Default fields: `https://api.cloudflare.com/client/v4/zones/{zone_id}/logpush/datasets/{dataset_id}/fields/default`.
2. **Timestamp format** (optional): The format in which timestamp fields will be returned. Value options: `unixnano` (nanoseconds unit - default), `unix` (seconds unit), `rfc3339` (seconds unit).
@@ -219,14 +209,15 @@ If you are still using logpull\_options, here are the options that you can custo
The **CVE-2021-44228** parameter can only be set through the API at this time. Updating your Logpush job through the dashboard will set this option to false.
:::
-To check if the selected **logpull\_options** are valid:
+To check if the selected **logpull_options** are valid:
@@ -256,9 +247,9 @@ Value can range from `0.0` (exclusive) to `1.0` (inclusive). `sample=0.1` means
These parameters can be used to control batch size when a destination has specific requirements. Files will be sent based on whichever parameter is hit first. If these options are not set, the system uses our internal defaults of 30 seconds, 100,000 records, or the destination's globally defined limits. A hypothetical request body showing these limits follows the list below.
-1. **max\_upload\_bytes** (optional): The maximum uncompressed file size of a batch of logs. This setting value must be between 5 MB and 1 GB. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size.
-2. **max\_upload\_records** (optional): The maximum number of log lines per batch. This setting must be between 1,000 and 1,000,000 lines. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this.
-3. **max\_upload\_interval\_seconds** (optional): The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this.
+1. **max_upload_bytes** (optional): The maximum uncompressed file size of a batch of logs. This setting value must be between 5 MB and 1 GB. Note that you cannot set a minimum file size; this means that log files may be much smaller than this batch size.
+2. **max_upload_records** (optional): The maximum number of log lines per batch. This setting must be between 1,000 and 1,000,000 lines. Note that you cannot specify a minimum number of log lines per batch; this means that log files may contain many fewer lines than this.
+3. **max_upload_interval_seconds** (optional): The maximum interval in seconds for log batches. This setting must be between 30 and 300 seconds. Note that you cannot specify a minimum interval for log batches; this means that log files may be sent in shorter intervals than this.
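+
+For illustration, these limits are set as top-level fields when creating or updating a job. The following is a hypothetical request body sketch only: the three `max_upload_*` field names come from the list above, while the job name, dataset, and destination (reusing the S3 example above) and the specific values are placeholders you should replace, keeping them within the documented ranges:
+
+```json
+{
+  "name": "batch-limits-example",
+  "dataset": "http_requests",
+  "destination_conf": "s3://mybucket/logs/{DATE}?region=us-east-1&sse=AES256",
+  "max_upload_bytes": 5000000,
+  "max_upload_records": 100000,
+  "max_upload_interval_seconds": 60
+}
+```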
## Custom fields
diff --git a/src/content/docs/logs/logpush/logpush-job/enable-destinations/sentinelone.mdx b/src/content/docs/logs/logpush/logpush-job/enable-destinations/sentinelone.mdx
new file mode 100644
index 000000000000000..194253fae334ccb
--- /dev/null
+++ b/src/content/docs/logs/logpush/logpush-job/enable-destinations/sentinelone.mdx
@@ -0,0 +1,162 @@
+---
+title: Enable SentinelOne
+pcx_content_type: how-to
+sidebar:
+ order: 64
+head:
+ - tag: title
+ content: Enable Logpush to SentinelOne
+---
+
+import { Render, APIRequest } from "~/components";
+
+The HTTP Event Collector (HEC) is a reliable method to send log data to SentinelOne Singularity Data Lake. Cloudflare Logpush supports pushing logs directly to SentinelOne HEC via the Cloudflare dashboard or API.
+
+## Manage via the Cloudflare dashboard
+
+
+
+4. In **Select a destination**, choose **SentinelOne**.
+
+5. Enter or select the following destination information:
+ - **SentinelOne HEC URL**
+ - **Auth Token** - Event Collector token.
+ - **Source Type** - For example, `marketplace-cloudflare-latest`.
+
+When you are done entering the destination details, select **Continue**.
+
+6. Select the dataset to push to the storage service.
+
+7. In the next step, you need to configure your logpush job:
+ - Enter the **Job name**.
+ - Under **If logs match**, you can select the events to include and/or remove from your logs. Refer to [Filters](/logs/logpush/logpush-job/filters/) for more information. Not all datasets have this option available.
+ - In **Send the following fields**, you can choose to either push all logs to your storage destination or selectively choose which logs you want to push.
+
+8. In **Advanced Options**, you can:
+ - Choose the format of timestamp fields in your logs (`RFC3339` (default), `Unix`, or `UnixNano`).
+ - Select a [sampling rate](/logs/logpush/logpush-job/api-configuration/#sampling-rate) for your logs or push a randomly-sampled percentage of logs.
+ - Enable redaction for `CVE-2021-44228`. This option will replace every occurrence of `${` with `x{`.
+
+9. Select **Submit** once you are done configuring your logpush job.
+
+## Manage via API
+
+To set up a SentinelOne Logpush job:
+
+1. Create a job with the appropriate endpoint URL and authentication parameters.
+2. Enable the job to begin pushing logs.
+
+:::note
+Unlike configuring Logpush jobs for AWS S3, GCS, or Azure, there is no ownership challenge when configuring Logpush to SentinelOne.
+:::
+
+
+
+### 1. Create a job
+
+To create a job, make a `POST` request to the Logpush jobs endpoint with the following fields:
+
+- **name** (optional) - Use your domain name as the job name.
+- **destination_conf** - A log destination consisting of an endpoint URL, source type, and authorization header, in the string format below; an assembled example with sample values follows this list.
+ - **SENTINELONE_ENDPOINT_URL**: The SentinelOne raw HTTP Event Collector URL with port. For example: `sentinelone://ingest.us1.sentinelone.net/services/collector/raw`. Cloudflare expects the SentinelOne endpoint to be `/services/collector/raw` when configuring the Logpush job.
+ - **SENTINELONE_AUTH_TOKEN**: The SentinelOne authorization token, URL-encoded. For example, `Bearer 0e6d94e8c-5792-4ad1-be3c-29bcaee0197d` becomes `Bearer%200e6d94e8c-5792-4ad1-be3c-29bcaee0197d`.
+ - **SOURCE_TYPE**: The SentinelOne source type. For example: `marketplace-cloudflare-latest`.
+
+```bash
+"https://?sourcetype=&header_Authorization="
+```
+
+- **dataset** - The category of logs you want to receive. Refer to [Datasets](/logs/logpush/logpush-job/datasets/) for the full list of supported datasets.
+
+- **output_options** (optional) - To configure fields, sample rate, and timestamp format, refer to [Log Output Options](/logs/logpush/logpush-job/log-output-options/). For timestamp, Cloudflare recommends using `timestamps=rfc3339`.
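+
+For illustration, combining the sample values given above (endpoint, source type, and token) would produce a `destination_conf` similar to the following. These values are examples only; replace them with your own, and note that the space in the `Bearer` token is percent-encoded as `%20`:
+
+```bash
+"sentinelone://ingest.us1.sentinelone.net/services/collector/raw?sourcetype=marketplace-cloudflare-latest&header_Authorization=Bearer%200e6d94e8c-5792-4ad1-be3c-29bcaee0197d"
+```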
+
+Example request using cURL:
+
+",
+ destination_conf:
+ "sentinelone://?sourcetype=&header_Authorization=",
+ output_options: {
+ field_names: [
+ "ClientIP",
+ "ClientRequestHost",
+ "ClientRequestMethod",
+ "ClientRequestURI",
+ "EdgeEndTimestamp",
+ "EdgeResponseBytes",
+ "EdgeResponseStatus",
+ "EdgeStartTimestamp",
+ "RayID",
+ ],
+ timestamp_format: "rfc3339",
+ },
+ dataset: "http_requests",
+ }}
+/>
+
+Response:
+
+```json
+{
+ "errors": [],
+ "messages": [],
+ "result": {
+ "id": ,
+ "dataset": "http_requests",
+ "enabled": false,
+ "name": "",
+ "output_options": {
+ "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp","EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
+ "timestamp_format": "rfc3339"
+ },
+ "destination_conf": "sentinelone://?sourcetype=&header_Authorization=",
+ "last_complete": null,
+ "last_error": null,
+ "error_message": null
+ },
+ "success": true
+}
+```
+
+### 2. Enable (update) a job
+
+To enable a job, make a `PUT` request to the Logpush jobs endpoint. Use the job ID returned from the previous step in the URL and send `{"enabled": true}` in the request body.
+
+Example request using cURL:
+
+<APIRequest
+  path="/zones/{zone_id}/logpush/jobs/{job_id}"
+  method="PUT"
+  json={{
+    enabled: true,
+  }}
+/>
+
+Response:
+
+```json
+{
+ "errors": [],
+ "messages": [],
+ "result": {
+ "id": ,
+ "dataset": "http_requests",
+ "enabled": true,
+ "name": "",
+ "output_options": {
+ "field_names": ["ClientIP", "ClientRequestHost", "ClientRequestMethod", "ClientRequestURI", "EdgeEndTimestamp","EdgeResponseBytes", "EdgeResponseStatus", "EdgeStartTimestamp", "RayID"],
+ "timestamp_format": "rfc3339"
+ },
+ "destination_conf": "sentinelone://?sourcetype=&header_Authorization=",
+ "last_complete": null,
+ "last_error": null,
+ "error_message": null
+ },
+ "success": true
+}
+```
+
+Refer to the [Logpush FAQ](/logs/faq/logpush/) for troubleshooting information.