import event_deltas_issue from '@site/static/images/use-cases/observability/event_deltas_issue.png';
import event_deltas_outliers from '@site/static/images/use-cases/observability/event_deltas_outliers.png';
import event_deltas_separation from '@site/static/images/use-cases/observability/event_deltas_separation.png';
import event_deltas_customization from '@site/static/images/use-cases/observability/event_deltas_customization.png';
import event_deltas_inappropriate from '@site/static/images/use-cases/observability/event_deltas_inappropriate.png';
Event Deltas in ClickStack are a trace-focused feature that automatically analyzes the properties of traces to uncover what changed when performance regresses. By comparing the latency distributions of normal versus slow traces within a corpus, ClickStack highlights which attributes are most correlated with the difference - whether that's a new deployment version, a specific endpoint, or a particular user ID.
Event Deltas work by issuing two queries: one for the selected outlier area and one for the inlier area. Each query is limited to the appropriate duration and time window. A sample of events from both result sets is then inspected, and columns for which a high concentration of values appears predominantly in the outliers are identified. Columns for which 100% of a value occurs only in the outlier subset are shown first, highlighting the attributes most responsible for the observed differences.
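The comparison described above can be sketched in a few lines. The following toy Python function is not ClickStack's actual implementation (which runs as queries against ClickHouse); the event attributes and field names are purely illustrative. It ranks `(column, value)` pairs by how concentrated they are in the outlier sample, so values that occur exclusively in outliers sort first:

```python
from collections import Counter

def event_deltas(outliers, inliers):
    """Rank (column, value) pairs by how concentrated they are in the
    outlier sample. A simplified sketch, not ClickStack's implementation."""
    out_counts = Counter((k, v) for ev in outliers for k, v in ev.items())
    in_counts = Counter((k, v) for ev in inliers for k, v in ev.items())
    deltas = []
    for key, n_out in out_counts.items():
        n_in = in_counts.get(key, 0)
        ratio = n_out / (n_out + n_in)  # 1.0 means the value was seen only in outliers
        deltas.append((ratio, key))
    # Values occurring exclusively in the outlier subset (ratio == 1.0) sort first
    return sorted(deltas, reverse=True)

# Hypothetical samples: slow traces all carry version "v2.1"
outliers = [{"version": "v2.1", "endpoint": "/checkout"},
            {"version": "v2.1", "endpoint": "/cart"}]
inliers = [{"version": "v2.0", "endpoint": "/cart"},
           {"version": "v2.0", "endpoint": "/checkout"}]
print(event_deltas(outliers, inliers)[0])  # (1.0, ('version', 'v2.1'))
```

Here `version=v2.1` surfaces first because 100% of its occurrences fall in the outlier subset, mirroring how ClickStack orders the attributes most responsible for the observed differences.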
## Customizing the graph {#customizing-the-graph}
Above the graph, you'll find controls that let you customize how the heatmap is generated. As you adjust these fields, the heatmap updates in real time, allowing you to visualize and compare relationships between any measurable value and its frequency over time.
- **Color (Z Axis)**: `count()`, representing the number of requests over time (X axis)
This setup shows latency distribution across time, with color intensity indicating how many events fall within each range.
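For intuition, the bucketing behind such a heatmap can be sketched as a tiny 2D histogram. This is an illustrative toy, not ClickStack's query engine; the field names (`timestamp`, `duration_ms`) and bucket sizes are assumptions:

```python
from collections import Counter

def heatmap_buckets(events, time_step=60, duration_step=100):
    """Bucket events into (time, duration) cells; the per-cell count
    drives the heatmap colour. Field names are illustrative."""
    cells = Counter()
    for ev in events:
        t = ev["timestamp"] // time_step * time_step            # X axis bucket
        d = ev["duration_ms"] // duration_step * duration_step  # Y axis bucket
        cells[(t, d)] += 1                                      # Z axis: count()
    return cells

events = [
    {"timestamp": 5,  "duration_ms": 120},
    {"timestamp": 30, "duration_ms": 180},
    {"timestamp": 70, "duration_ms": 950},
]
print(heatmap_buckets(events))  # two events share the first cell, one sits far higher on the Y axis
```

A lone event landing in a high-duration cell like the third one above is exactly the kind of visual outlier the heatmap makes easy to spot and select.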
**Adjusting Parameters**
You can modify these parameters to explore different dimensions of your data:
- **Value**: Controls what is plotted on the Y axis. For example, replace `Duration` with metrics like error rate or response size.
- **Count**: Controls the color mapping. You can switch from `count()` (number of events per bucket) to other aggregation functions such as `avg()`, `sum()`, `p95()`, or even custom expressions like `countDistinct(field)`.
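To see how swapping the aggregation changes what the colour encodes, here is a toy nearest-rank `p95` in Python (an illustration only; in ClickStack the aggregation runs in ClickHouse, and the bucket values below are invented):

```python
import math

def p95(values):
    """Nearest-rank 95th percentile of a bucket's values."""
    vals = sorted(values)
    idx = math.ceil(0.95 * len(vals)) - 1
    return vals[idx]

# Hypothetical per-bucket duration samples
buckets = {"svc-a": [100, 120, 900], "svc-b": [80, 85, 90]}

# With count() both buckets colour identically (3 events each);
# with p95() the bucket hiding a 900 ms tail stands out instead.
colors = {name: p95(durations) for name, durations in buckets.items()}
print(colors)  # {'svc-a': 900, 'svc-b': 90}
```

Choosing an aggregation that reflects the tail of the distribution, rather than raw event counts, can make latency outliers far easier to see.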
Event Deltas work best when the analysis is focused on a specific service. Latency across multiple services can vary widely, making it harder to identify the columns and values most responsible for outliers. Before enabling Event Deltas, filter spans to a set where the distribution of latencies is expected to be similar. For the most useful insights, target sets where wide latency variation is unexpected, avoiding cases where it's the norm (e.g., two different services).
When selecting an area, users should aim for subsets where there is a clear distribution of slower versus faster durations, allowing the higher-latency spans to be cleanly isolated for analysis. For example, note the selected area below clearly captures a set of slower spans for analysis.