151306: pkg/cli (tsdump): add delta calculation for counter metrics r=arjunmahishi a=arjunmahishi
Introduce a `CumulativeToDeltaProcessor` to handle delta calculations for counter
metrics in the Datadog writer. This ensures parity between tsdump counters and
the counters reported by CockroachDB Cloud (a minimal sketch of the delta logic
follows the change list below).
Changes:
- Added `CumulativeToDeltaProcessor` to `datadogWriter`.
- Updated the `dump` method to process counter metrics through the new processor.
- `resolveMetricType` now builds the metric type map for both the `datadog` and
`datadog-init` modes.
- Added unit tests for delta calculation, reset detection, and cross-batch
persistence.
- Delta processing is gated behind the `COCKROACH_TSDUMP_DELTA` environment
variable; it is enabled when `COCKROACH_TSDUMP_DELTA=1` is set.
- Increased the default for `--upload-workers` from 50 to 75 to compensate for
the added overhead of the delta calculation. This was tested with a 10 GB
tsdump upload to make sure Datadog returns no 4xx errors.
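
Conceptually, the delta calculation has to remember the last cumulative value seen for each series across upload batches, emit the difference for each new sample, and treat a drop in the cumulative value as a counter reset (e.g. after a node restart). The Go sketch below illustrates that logic; the type name, series-key format, and return values are hypothetical and are not the actual `CumulativeToDeltaProcessor` API from this PR.

```go
// Minimal, illustrative cumulative-to-delta conversion for counter metrics.
// This is NOT the CumulativeToDeltaProcessor from the PR; the names and the
// series-key scheme are hypothetical.
package main

import "fmt"

// deltaProcessor remembers the last cumulative value seen for each series
// key (e.g. "metricName/nodeID") so that deltas stay correct across batches.
type deltaProcessor struct {
	prev map[string]float64
}

func newDeltaProcessor() *deltaProcessor {
	return &deltaProcessor{prev: make(map[string]float64)}
}

// delta converts one cumulative sample into a delta. The second return value
// reports whether a baseline existed; the first sample of a series has no
// baseline and is returned unchanged so the caller can decide to skip it.
func (p *deltaProcessor) delta(key string, cumulative float64) (float64, bool) {
	last, seen := p.prev[key]
	p.prev[key] = cumulative
	if !seen {
		return cumulative, false
	}
	if cumulative < last {
		// The cumulative value went down: treat it as a counter reset
		// (e.g. a node restart) and emit the new value as the delta.
		return cumulative, true
	}
	return cumulative - last, true
}

func main() {
	p := newDeltaProcessor()
	for _, v := range []float64{10, 25, 40, 5 /* reset */, 12} {
		d, hasBaseline := p.delta("sql.query.count/n1", v)
		fmt.Printf("cumulative=%v delta=%v hasBaseline=%v\n", v, d, hasBaseline)
	}
}
```

In the actual command, this code path is exercised only when `COCKROACH_TSDUMP_DELTA=1` is set in the environment, as noted above.
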
Epic: None
Jira: CRDB-52597
Release note: None
---
**Datadog**
_No need to use `monotonic_diff`_
<img width="2618" height="2528" alt="image" src="https://github.com/user-attachments/assets/fb038e6a-7911-4c45-a8b9-88c74f0c5119" />
**Same metric on DB console**
<img width="2030" height="844" alt="image" src="https://github.com/user-attachments/assets/0687d139-7a98-4f85-9f82-160a4c5844bf" />
Co-authored-by: Arjun Mahishi <arjun.mahishi@gmail.com>
**pkg/cli/debug.go** (+1 -1)
@@ -1617,7 +1617,7 @@ func init() {
 	f.StringVar(&debugTimeSeriesDumpOpts.userName, "user-name", "", "name of the user to perform datadog upload")
 	f.StringVar(&debugTimeSeriesDumpOpts.storeToNodeMapYAMLFile, "store-to-node-map-file", "", "yaml file path which contains the mapping of store ID to node ID for datadog upload.")
 	f.BoolVar(&debugTimeSeriesDumpOpts.dryRun, "dry-run", false, "run in dry-run mode without making any actual uploads")
-	f.IntVar(&debugTimeSeriesDumpOpts.noOfUploadWorkers, "upload-workers", 50, "number of workers to upload the time series data in parallel")
+	f.IntVar(&debugTimeSeriesDumpOpts.noOfUploadWorkers, "upload-workers", 75, "number of workers to upload the time series data in parallel")
 	f.BoolVar(&debugTimeSeriesDumpOpts.retryFailedRequests, "retry-failed-requests", false, "retry previously failed requests from file")