Commit 833665a ("minor adjustments")
1 parent b69c684 commit 833665a

10 files changed, +431 -215 lines changed

Lines changed: 78 additions & 0 deletions

---
sidebar_label: 'ClickPipes - streaming and object storage'
slug: /cloud/reference/billing/clickpipes/streaming-and-object-storage
title: 'ClickPipes for streaming and object storage'
description: 'Overview of billing for streaming and object storage ClickPipes'
---

# ClickPipes for streaming and object storage {#clickpipes-for-streaming-object-storage}

This section outlines the pricing model of ClickPipes for streaming and object storage.

## What does the ClickPipes pricing structure look like? {#what-does-the-clickpipes-pricing-structure-look-like}

It consists of two dimensions:

- **Compute**: Price **per unit per hour**.
  Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
  It applies to all ClickPipes types.
- **Ingested data**: Price **per GB**.
  The ingested data rate applies to all streaming ClickPipes
  (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream, Azure Event Hubs)
  for the data transferred via the replica pods. The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
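
Put together, the two dimensions combine roughly as follows (an illustrative formula, not an official billing definition), where "compute units" is the total across all replicas of the pipe and the second term applies only to streaming ClickPipes:

$$\text{cost} \approx (\text{compute units} \times \text{hours} \times \text{price per unit-hour}) + (\text{GB ingested} \times \text{price per GB})$$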

## What are ClickPipes replicas? {#what-are-clickpipes-replicas}

ClickPipes ingests data from remote data sources via a dedicated infrastructure
that runs and scales independently of the ClickHouse Cloud service.
For this reason, it uses dedicated compute replicas.

## What is the default number of replicas and their size? {#what-is-the-default-number-of-replicas-and-their-size}

Each ClickPipe defaults to 1 replica that is provided with 512 MiB of RAM and 0.125 vCPU (XS).
This corresponds to **0.0625** ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).
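
For reference, that figure follows directly from the replica size relative to a full compute unit:

$$\frac{512 \text{ MiB RAM}}{8 \text{ GiB RAM}} = \frac{0.125 \text{ vCPU}}{2 \text{ vCPU}} = 0.0625 \text{ compute units}$$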

## What are the ClickPipes public prices? {#what-are-the-clickpipes-public-prices}

- Compute: \$0.20 per unit per hour (\$0.0125 per replica per hour for the default replica size)
- Ingested data: \$0.04 per GB

The price for the Compute dimension depends on the **number** and **size** of replica(s) in a ClickPipe. The default replica size can be adjusted using vertical scaling, and each replica size is priced as follows:

| Replica Size               | Compute Units | RAM     | vCPU  | Price per Hour |
|----------------------------|---------------|---------|-------|----------------|
| Extra Small (XS) (default) | 0.0625        | 512 MiB | 0.125 | $0.0125        |
| Small (S)                  | 0.125         | 1 GiB   | 0.25  | $0.025         |
| Medium (M)                 | 0.25          | 2 GiB   | 0.5   | $0.05          |
| Large (L)                  | 0.5           | 4 GiB   | 1.0   | $0.10          |
| Extra Large (XL)           | 1.0           | 8 GiB   | 2.0   | $0.20          |

## How does it look in an illustrative example? {#how-does-it-look-in-an-illustrative-example}

The following examples assume a single M-sized replica, unless explicitly mentioned.

<table><thead>
<tr>
  <th></th>
  <th>100 GB over 24h</th>
  <th>1 TB over 24h</th>
  <th>10 TB over 24h</th>
</tr></thead>
<tbody>
<tr>
  <td>Streaming ClickPipe</td>
  <td>(0.25 x 0.20 x 24) + (0.04 x 100) = \$5.20</td>
  <td>(0.25 x 0.20 x 24) + (0.04 x 1000) = \$41.20</td>
  <td>With 4 replicas: <br></br> (0.25 x 0.20 x 24 x 4) + (0.04 x 10000) = \$404.80</td>
</tr>
<tr>
  <td>Object Storage ClickPipe $^1$</td>
  <td>(0.25 x 0.20 x 24) = \$1.20</td>
  <td>(0.25 x 0.20 x 24) = \$1.20</td>
  <td>(0.25 x 0.20 x 24) = \$1.20</td>
</tr>
</tbody>
</table>

$^1$ _Only ClickPipes compute for orchestration; the actual data transfer is handled by the underlying ClickHouse service._
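
If it helps to see the arithmetic in one place, here is a minimal Python sketch (purely illustrative, not an official billing calculator) that reproduces the figures above using the public prices listed on this page:

```python
# Illustrative cost sketch for a ClickPipe, using the public prices above.
# Not an official billing calculator.

COMPUTE_PRICE_PER_UNIT_HOUR = 0.20  # $ per compute unit per hour
INGEST_PRICE_PER_GB = 0.04          # $ per GB, streaming ClickPipes only

def clickpipe_cost(replica_units: float, replicas: int, hours: float, ingested_gb: float = 0.0) -> float:
    """Compute cost plus ingest cost; pass ingested_gb=0 for object storage ClickPipes."""
    compute = replica_units * replicas * hours * COMPUTE_PRICE_PER_UNIT_HOUR
    ingest = ingested_gb * INGEST_PRICE_PER_GB
    return compute + ingest

print(clickpipe_cost(0.25, 1, 24, 100))     # 5.2   -> one M replica, 100 GB over 24h
print(clickpipe_cost(0.25, 4, 24, 10_000))  # 404.8 -> four M replicas, 10 TB over 24h
print(clickpipe_cost(0.25, 1, 24))          # 1.2   -> object storage, compute only
```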

Lines changed: 98 additions & 0 deletions

---
sidebar_label: 'ClickPipes - PostgreSQL CDC'
slug: /cloud/reference/billing/clickpipes/postgres-cdc
title: 'ClickPipes for PostgreSQL CDC'
description: 'Overview of billing for PostgreSQL CDC ClickPipes'
---

# ClickPipes for PostgreSQL CDC {#clickpipes-for-postgresql-cdc}

This section outlines the pricing model for our Postgres Change Data Capture (CDC)
connector in ClickPipes. In designing this model, our goal was to keep pricing
highly competitive while staying true to our core vision:

> Making it seamless and affordable for customers to move data from Postgres to ClickHouse for real-time analytics.

The connector is over **5x more cost-effective** than external
ETL tools and similar features in other database platforms.

:::note
Pricing will be metered in monthly bills beginning **September 1st, 2025**,
for all customers (both existing and new) using Postgres CDC ClickPipes. Until
then, usage is free. Customers have a 3-month window starting May 29 (GA announcement)
to review and optimize their costs if needed, although we expect most will not need
to make any changes.
:::

## Pricing dimensions {#pricing-dimensions}

There are two main dimensions to pricing:

1. **Ingested Data**: The raw, uncompressed bytes coming from Postgres and
   ingested into ClickHouse.
2. **Compute**: Compute units provisioned per service to manage multiple
   Postgres CDC ClickPipes, separate from the compute units used by the
   ClickHouse Cloud service itself. This additional compute is dedicated specifically
   to Postgres CDC ClickPipes and is billed at the service level, not per
   individual pipe. Each compute unit includes 2 vCPUs and 8 GB of RAM.

## Ingested data {#ingested-data}

The Postgres CDC connector operates in two main phases:

- **Initial load / resync**: This captures a full snapshot of Postgres tables
  and occurs when a pipe is first created or re-synced.
- **Continuous Replication (CDC)**: Ongoing replication of changes—such as inserts,
  updates, deletes, and schema changes—from Postgres to ClickHouse.

In most use cases, continuous replication accounts for over 90% of a ClickPipe's
life cycle. Because initial loads involve transferring a large volume of data all
at once, we offer a lower rate for that phase.

| Phase                            | Cost         |
|----------------------------------|--------------|
| **Initial load / resync**        | $0.10 per GB |
| **Continuous Replication (CDC)** | $0.20 per GB |
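
As a quick hypothetical illustration (the 1 TB and 200 GB figures below are invented example values, not defaults), a pipe that snapshots 1 TB during the initial load and then replicates 200 GB of changes each month would incur the following ingested-data charges, with compute billed separately as described in the next section:

$$1,000 \text{ GB} \times \$0.10/\text{GB} = \$100 \text{ (one-time initial load)}$$

$$200 \text{ GB} \times \$0.20/\text{GB} = \$40 \text{ per month (CDC)}$$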

## Compute {#compute}

This dimension covers the compute units provisioned per service just for Postgres
ClickPipes. Compute is shared across all Postgres pipes within a service. **It
is provisioned when the first Postgres pipe is created and deallocated when no
Postgres CDC pipes remain**. The amount of compute provisioned depends on your
organization's tier:

| Tier                         | Cost                                          |
|------------------------------|-----------------------------------------------|
| **Basic Tier**               | 0.5 compute unit per service — $0.10 per hour |
| **Scale or Enterprise Tier** | 1 compute unit per service — $0.20 per hour   |

## Example {#example}

Let's say your service is in the Scale tier and has the following setup:

- 2 Postgres ClickPipes running continuous replication
- Each pipe ingests 500 GB of data changes (CDC) per month
- When the first pipe is kicked off, the service provisions **1 compute unit under the Scale Tier** for Postgres CDC

### Monthly cost breakdown {#cost-breakdown}

**Ingested Data (CDC)**:

$$2 \text{ pipes} \times 500 \text{ GB} = 1,000 \text{ GB per month}$$

$$1,000 \text{ GB} \times \$0.20/\text{GB} = \$200$$

**Compute**:

$$1 \text{ compute unit} \times \$0.20/\text{hr} \times 730 \text{ hours (approximate month)} = \$146$$

:::note
Compute is shared across both pipes.
:::

**Total Monthly Cost**:

$$\$200 \text{ (ingest)} + \$146 \text{ (compute)} = \$346$$
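
The same breakdown, written as a short Python sketch (illustrative only, not an official billing calculator; the rates and the 730-hour month are taken from this page):

```python
# Reproduces the example above: 2 CDC pipes at 500 GB/month each, Scale tier compute.
# Illustrative sketch only, not an official billing calculator.

CDC_PRICE_PER_GB = 0.20              # continuous replication (CDC) rate
COMPUTE_PRICE_PER_UNIT_HOUR = 0.20   # Scale/Enterprise tier
HOURS_PER_MONTH = 730                # approximate month

pipes = 2
gb_per_pipe = 500                    # CDC changes ingested per pipe per month
compute_units = 1                    # shared across all Postgres CDC pipes in the service

ingest_cost = pipes * gb_per_pipe * CDC_PRICE_PER_GB                           # $200
compute_cost = compute_units * COMPUTE_PRICE_PER_UNIT_HOUR * HOURS_PER_MONTH   # $146
print(ingest_cost + compute_cost)                                              # 346.0
```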

5 files renamed without changes.
