Commit a153719

Merge branch 'main' into fix_cloud
2 parents 98fe78e + 56d5ed3 commit a153719

19 files changed: +307 -290 lines changed


docs/cloud/features/06_security/02_cloud-access-management/index.md

Lines changed: 1 addition & 1 deletion
@@ -7,4 +7,4 @@ description: 'Cloud Access Management Table Of Contents'
 | Page | Description |
 |----------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------|
 | [Overview](/cloud/security/cloud-access-management/overview) | Overview of access control in ClickHouse Cloud |
-| [Cloud Authentication](/cloud/security/cloud-authentication) | A guide which explores some good practices for configuring your authentication |
+| [Cloud Authentication](/cloud/security/cloud-authentication) | A guide which explores some good practices for configuring your authentication |

docs/cloud/reference/03_billing/01_billing_overview.md

Lines changed: 1 addition & 261 deletions
@@ -5,8 +5,6 @@ title: 'Pricing'
 description: 'Overview page for ClickHouse Cloud pricing'
 ---

-import ClickPipesFAQ from '../../_snippets/_clickpipes_faq.md'
-
 For pricing information, see the [ClickHouse Cloud Pricing](https://clickhouse.com/pricing#pricing-calculator) page.
 ClickHouse Cloud bills based on the usage of compute, storage, [data transfer](/cloud/manage/network-data-transfer) (egress over the internet and cross-region), and [ClickPipes](/integrations/clickpipes).
 To understand what can affect your bill, and ways that you can manage your spend, keep reading.
@@ -360,262 +358,4 @@ However, combining two services in a warehouse and idling one of them helps you

 ## ClickPipes pricing {#clickpipes-pricing}

-### ClickPipes for Postgres CDC {#clickpipes-for-postgres-cdc}
-
-This section outlines the pricing model for our Postgres Change Data Capture (CDC)
-connector in ClickPipes. In designing this model, our goal was to keep pricing
-highly competitive while staying true to our core vision:
-
-> Making it seamless and
-affordable for customers to move data from Postgres to ClickHouse for
-real-time analytics.
-
-The connector is over **5x more cost-effective** than external
-ETL tools and similar features in other database platforms.
-
-:::note
-Pricing will start being metered in monthly bills beginning **September 1st, 2025,**
-for all customers (both existing and new) using Postgres CDC ClickPipes. Until
-then, usage is free. Customers have a 3-month window starting May 29 (GA announcement)
-to review and optimize their costs if needed, although we expect most will not need
-to make any changes.
-:::
-
-#### Pricing dimensions {#pricing-dimensions}
-
-There are two main dimensions to pricing:
-
-1. **Ingested Data**: The raw, uncompressed bytes coming from Postgres and
-ingested into ClickHouse.
-2. **Compute**: The compute units provisioned per service manage multiple
-Postgres CDC ClickPipes and are separate from the compute units used by the
-ClickHouse Cloud service. This additional compute is dedicated specifically
-to Postgres CDC ClickPipes. Compute is billed at the service level, not per
-individual pipe. Each compute unit includes 2 vCPUs and 8 GB of RAM.
-
-#### Ingested data {#ingested-data}
-
-The Postgres CDC connector operates in two main phases:
-
-- **Initial load / resync**: This captures a full snapshot of Postgres tables
-and occurs when a pipe is first created or re-synced.
-- **Continuous Replication (CDC)**: Ongoing replication of changes—such as inserts,
-updates, deletes, and schema changes—from Postgres to ClickHouse.
-
-In most use cases, continuous replication accounts for over 90% of a ClickPipe
-life cycle. Because initial loads involve transferring a large volume of data all
-at once, we offer a lower rate for that phase.
-
-| Phase | Cost |
-|----------------------------------|--------------|
-| **Initial load / resync** | $0.10 per GB |
-| **Continuous Replication (CDC)** | $0.20 per GB |
-
-#### Compute {#compute}
-
-This dimension covers the compute units provisioned per service just for Postgres
-ClickPipes. Compute is shared across all Postgres pipes within a service. **It
-is provisioned when the first Postgres pipe is created and deallocated when no
-Postgres CDC pipes remain**. The amount of compute provisioned depends on your
-organization's tier:
-
-| Tier | Cost |
-|------------------------------|-----------------------------------------------|
-| **Basic Tier** | 0.5 compute unit per service — $0.10 per hour |
-| **Scale or Enterprise Tier** | 1 compute unit per service — $0.20 per hour |
-
-#### Example {#example}
-
-Let's say your service is in Scale tier and has the following setup:
-
-- 2 Postgres ClickPipes running continuous replication
-- Each pipe ingests 500 GB of data changes (CDC) per month
-- When the first pipe is kicked off, the service provisions **1 compute unit under the Scale Tier** for Postgres CDC
-
-##### Monthly cost breakdown {#cost-breakdown}
-
-**Ingested Data (CDC)**:
-
-$$ 2 \text{ pipes} \times 500 \text{ GB} = 1,000 \text{ GB per month} $$
-
-$$ 1,000 \text{ GB} \times \$0.20/\text{GB} = \$200 $$
-
-**Compute**:
-
-$$1 \text{ compute unit} \times \$0.20/\text{hr} \times 730 \text{ hours (approximate month)} = \$146$$
-
-:::note
-Compute is shared across both pipes
-:::
-
-**Total Monthly Cost**:
-
-$$\$200 \text{ (ingest)} + \$146 \text{ (compute)} = \$346$$
-
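The arithmetic in the removed example above can be re-run with other volumes. Below is a minimal Python sketch (an illustration added here, not part of the changed file; the helper name and defaults are hypothetical, while the rates and the approximate 730-hour month come from the tables and example above):

```python
# Minimal sketch of the Postgres CDC pricing arithmetic above (illustrative only;
# rates and the ~730-hour month are taken from the example, the function name is hypothetical).
CDC_RATE_PER_GB = 0.20              # continuous replication (CDC), $ per GB
INITIAL_LOAD_RATE_PER_GB = 0.10     # initial load / resync, $ per GB
SCALE_TIER_COMPUTE_PER_HOUR = 0.20  # 1 compute unit on Scale/Enterprise tiers, $ per hour
HOURS_PER_MONTH = 730               # approximate month used in the example


def monthly_postgres_cdc_cost(cdc_gb: float, initial_load_gb: float = 0.0,
                              compute_units: float = 1.0,
                              compute_rate_per_hour: float = SCALE_TIER_COMPUTE_PER_HOUR) -> float:
    """Estimate one service's monthly Postgres CDC ClickPipes cost (ingest + shared compute)."""
    ingest = cdc_gb * CDC_RATE_PER_GB + initial_load_gb * INITIAL_LOAD_RATE_PER_GB
    compute = compute_units * compute_rate_per_hour * HOURS_PER_MONTH  # billed per service, not per pipe
    return ingest + compute


# Two pipes replicating 500 GB of changes each per month, 1 compute unit (Scale tier):
print(f"${monthly_postgres_cdc_cost(cdc_gb=2 * 500):.2f}")  # $346.00 = $200 ingest + $146 compute
```
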
-### ClickPipes for streaming and object storage {#clickpipes-for-streaming-object-storage}
-
-This section outlines the pricing model of ClickPipes for streaming and object storage.
-
-#### What does the ClickPipes pricing structure look like? {#what-does-the-clickpipes-pricing-structure-look-like}
-
-It consists of two dimensions:
-
-- **Compute**: Price **per unit per hour**.
-Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
-It applies to all ClickPipes types.
-- **Ingested data**: Price **per GB**.
-The ingested data rate applies to all streaming ClickPipes
-(Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream, Azure Event Hubs)
-for the data transferred via the replica pods. The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
-
-#### What are ClickPipes replicas? {#what-are-clickpipes-replicas}
-
-ClickPipes ingests data from remote data sources via a dedicated infrastructure
-that runs and scales independently of the ClickHouse Cloud service.
-For this reason, it uses dedicated compute replicas.
-
-#### What is the default number of replicas and their size? {#what-is-the-default-number-of-replicas-and-their-size}
-
-Each ClickPipe defaults to 1 replica that is provided with 512 MiB of RAM and 0.125 vCPU (XS).
-This corresponds to **0.0625** ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).
-
-#### What are the ClickPipes public prices? {#what-are-the-clickpipes-public-prices}
-
-- Compute: \$0.20 per unit per hour (\$0.0125 per replica per hour for the default replica size)
-- Ingested data: \$0.04 per GB
-
-The price for the Compute dimension depends on the **number** and **size** of replica(s) in a ClickPipe. The default replica size can be adjusted using vertical scaling, and each replica size is priced as follows:
-
-| Replica Size | Compute Units | RAM | vCPU | Price per Hour |
-|----------------------------|---------------|---------|--------|----------------|
-| Extra Small (XS) (default) | 0.0625 | 512 MiB | 0.125 | $0.0125 |
-| Small (S) | 0.125 | 1 GiB | 0.25 | $0.025 |
-| Medium (M) | 0.25 | 2 GiB | 0.5 | $0.05 |
-| Large (L) | 0.5 | 4 GiB | 1.0 | $0.10 |
-| Extra Large (XL) | 1.0 | 8 GiB | 2.0 | $0.20 |
-
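As a quick cross-check of the table above, each replica size's hourly price is simply its compute units multiplied by the \$0.20 per-unit hourly rate. A small Python sketch (an illustration added here, not part of the changed file; the variable names are arbitrary):

```python
# Illustrative sketch: per-replica hourly price = compute units x $0.20 per unit per hour.
COMPUTE_RATE_PER_UNIT_HOUR = 0.20

REPLICA_COMPUTE_UNITS = {   # replica size -> ClickHouse compute units (from the table above)
    "XS": 0.0625,           # 512 MiB RAM, 0.125 vCPU (default)
    "S": 0.125,
    "M": 0.25,
    "L": 0.5,
    "XL": 1.0,
}

for size, units in REPLICA_COMPUTE_UNITS.items():
    print(f"{size}: ${units * COMPUTE_RATE_PER_UNIT_HOUR:.4f} per replica per hour")
# XS: $0.0125  S: $0.0250  M: $0.0500  L: $0.1000  XL: $0.2000
```
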
-#### How does it look in an illustrative example? {#how-does-it-look-in-an-illustrative-example}
-
-The following examples assume a single M-sized replica, unless explicitly mentioned.
-
-<table><thead>
-<tr>
-<th></th>
-<th>100 GB over 24h</th>
-<th>1 TB over 24h</th>
-<th>10 TB over 24h</th>
-</tr></thead>
-<tbody>
-<tr>
-<td>Streaming ClickPipe</td>
-<td>(0.25 x 0.20 x 24) + (0.04 x 100) = \$5.20</td>
-<td>(0.25 x 0.20 x 24) + (0.04 x 1000) = \$41.20</td>
-<td>With 4 replicas: <br></br> (0.25 x 0.20 x 24 x 4) + (0.04 x 10000) = \$404.80</td>
-</tr>
-<tr>
-<td>Object Storage ClickPipe $^1$</td>
-<td>(0.25 x 0.20 x 24) = \$1.20</td>
-<td>(0.25 x 0.20 x 24) = \$1.20</td>
-<td>(0.25 x 0.20 x 24) = \$1.20</td>
-</tr>
-</tbody>
-</table>
-
-$^1$ _Only ClickPipes compute for orchestration,
-effective data transfer is handled by the underlying ClickHouse Service_
-
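The figures in the example table above can be reproduced from the public prices listed earlier: compute cost is replicas x compute units x \$0.20 x hours, streaming pipes add \$0.04 per ingested GB, and object storage pipes are billed for compute (orchestration) only. A minimal Python sketch (an illustration added here, not part of the changed file; the function name is hypothetical):

```python
# Illustrative sketch of the example table above (not part of the changed file).
COMPUTE_RATE_PER_UNIT_HOUR = 0.20    # $ per compute unit per hour
STREAMING_INGEST_RATE_PER_GB = 0.04  # $ per GB, streaming ClickPipes only
M_REPLICA_UNITS = 0.25               # an M-sized replica is 0.25 compute units


def clickpipe_cost(hours: float, ingested_gb: float = 0.0, replicas: int = 1,
                   replica_units: float = M_REPLICA_UNITS) -> float:
    """Compute + ingest cost; use ingested_gb=0 for object storage ClickPipes (compute only)."""
    compute = replicas * replica_units * COMPUTE_RATE_PER_UNIT_HOUR * hours
    return compute + ingested_gb * STREAMING_INGEST_RATE_PER_GB


print(f"${clickpipe_cost(24, ingested_gb=100):.2f}")                # $5.20   (100 GB over 24h)
print(f"${clickpipe_cost(24, ingested_gb=1000):.2f}")               # $41.20  (1 TB over 24h)
print(f"${clickpipe_cost(24, ingested_gb=10000, replicas=4):.2f}")  # $404.80 (10 TB over 24h, 4 replicas)
print(f"${clickpipe_cost(24):.2f}")                                 # $1.20   (object storage, any volume)
```
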
-## ClickPipes pricing FAQ {#clickpipes-pricing-faq}
-
-Below, you will find frequently asked questions about CDC ClickPipes and streaming
-and object-based storage ClickPipes.
-
-### FAQ for Postgres CDC ClickPipes {#faq-postgres-cdc-clickpipe}
-
-<details>
-
-<summary>Is the ingested data measured in pricing based on compressed or uncompressed size?</summary>
-
-The ingested data is measured as _uncompressed data_ coming from Postgres—both
-during the initial load and CDC (via the replication slot). Postgres does not
-compress data during transit by default, and ClickPipe processes the raw,
-uncompressed bytes.
-
-</details>
-
-<details>
-
-<summary>When will Postgres CDC pricing start appearing on my bills?</summary>
-
-Postgres CDC ClickPipes pricing begins appearing on monthly bills starting
-**September 1st, 2025**, for all customers—both existing and new. Until then,
-usage is free. Customers have a **3-month window** starting from **May 29**
-(the GA announcement date) to review and optimize their usage if needed, although
-we expect most won't need to make any changes.
-
-</details>
-
-<details>
-
-<summary>Will I be charged if I pause my pipes?</summary>
-
-No data ingestion charges apply while a pipe is paused, since no data is moved.
-However, compute charges still apply—either 0.5 or 1 compute unit—based on your
-organization's tier. This is a fixed service-level cost and applies across all
-pipes within that service.
-
-</details>
-
-<details>
-
-<summary>How can I estimate my pricing?</summary>
-
-The Overview page in ClickPipes provides metrics for both initial load/resync and
-CDC data volumes. You can estimate your Postgres CDC costs using these metrics
-in conjunction with the ClickPipes pricing.
-
-</details>
-
-<details>
-
-<summary>Can I scale the compute allocated for Postgres CDC in my service?</summary>
-
-By default, compute scaling is not user-configurable. The provisioned resources
-are optimized to handle most customer workloads. If your use case
-requires more or less compute, please open a support ticket so we can evaluate
-your request.
-
-</details>
-
-<details>
-
-<summary>What is the pricing granularity?</summary>
-
-- **Compute**: Billed per hour. Partial hours are rounded up to the next hour.
-- **Ingested Data**: Measured and billed per gigabyte (GB) of uncompressed data.
-
-</details>
-
-<details>
-
-<summary>Can I use my ClickHouse Cloud credits for Postgres CDC via ClickPipes?</summary>
-
-Yes. ClickPipes pricing is part of the unified ClickHouse Cloud pricing. Any
-platform credits you have will automatically apply to ClickPipes usage as well.
-
-</details>
-
-<details>
-
-<summary>How much additional cost should I expect from Postgres CDC ClickPipes in my existing monthly ClickHouse Cloud spend?</summary>
-
-The cost varies based on your use case, data volume, and organization tier.
-That said, most existing customers see an increase of **0–15%** relative to their
-existing monthly ClickHouse Cloud spend post trial. Actual costs may vary
-depending on your workload—some workloads involve high data volumes with
-less processing, while others require more processing with less data.
-
-</details>
-
-### FAQ for streaming and object storage ClickPipes {#faq-streaming-and-object-storage}
-
-<ClickPipesFAQ/>
+For information on ClickPipes billing, please see ["ClickPipes billing"](/cloud/reference/billing/clickpipes)
