
Commit 950c1cd

Merge branch 'ClickHouse:main' into code_block_imports

2 parents 9924be0 + b85a441

File tree

38 files changed: +1042, -201 lines

docs/about-us/adopters.md

Lines changed: 4 additions & 0 deletions
Large diffs are not rendered by default.

docs/cloud/manage/api/api-overview.md

Lines changed: 18 additions & 0 deletions
@@ -25,6 +25,13 @@ consume the ClickHouse Cloud API docs, we offer a JSON-based Swagger endpoint
 via https://api.clickhouse.cloud/v1. You can also find the API docs via
 the [Swagger UI](https://clickhouse.com/docs/cloud/manage/api/swagger).
 
+:::note
+If your organization has been migrated to one of the [new pricing plans](https://clickhouse.com/pricing?plan=scale&provider=aws&region=us-east-1&hours=8&storageCompressed=false) and you use OpenAPI, you will need to remove the `tier` field from the service creation `POST` request.
+
+The `tier` field has been removed from the service object, as we no longer have service tiers.
+This will affect the objects returned by the `POST`, `GET`, and `PATCH` service requests, so any code that consumes these APIs may need to be adjusted to handle these changes.
+:::
+
 ## Rate limits {#rate-limits}
 
 Developers are limited to 100 API keys per organization. Each API key has a
@@ -43,6 +50,17 @@ You can view the Terraform provider docs in the [Terraform registry](https://reg
 If you'd like to contribute to the ClickHouse Terraform Provider, you can view
 the source [in the GitHub repo](https://github.com/ClickHouse/terraform-provider-clickhouse).
 
+:::note
+If your organization has been migrated to one of the [new pricing plans](https://clickhouse.com/pricing?plan=scale&provider=aws&region=us-east-1&hours=8&storageCompressed=false), you will be required to use our [ClickHouse Terraform provider](https://registry.terraform.io/providers/ClickHouse/clickhouse/latest/docs) version 2.0.0 or above. This upgrade is required to handle changes to the service's `tier` attribute: after the pricing migration, the `tier` field is no longer accepted, and references to it should be removed.
+
+You can now also specify the `num_replicas` field as a property of the service resource.
+:::
+
+## Terraform and OpenAPI new pricing: replica settings explained
+The number of replicas each service is created with defaults to 3 for the Scale and Enterprise tiers, and to 1 for the Basic tier.
+For the Scale and Enterprise tiers, you can adjust it by passing a `numReplicas` field in the service creation request.
+The value of `numReplicas` must be between 2 and 20 for the first service in a warehouse. Services created in an existing warehouse can have as few as 1 replica.
 
 ## Support {#support}
 
 We recommend visiting [our Slack channel](https://clickhouse.com/slack) first to get quick support. If
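
Taken together, the two notes above change what a valid service-creation call looks like. The sketch below is illustrative only: it assumes the OpenAPI `POST /v1/organizations/{organizationId}/services` endpoint with `ORG_ID`, `KEY_ID`, and `KEY_SECRET` as placeholder environment variables, omits the removed `tier` field, and passes `numReplicas` instead. Check the Swagger UI for the full set of required fields before relying on it.

```bash
# Illustrative post-migration service creation: there is no "tier" field;
# numReplicas must be 2-20 for the first service in a warehouse.
# ORG_ID, KEY_ID, and KEY_SECRET are placeholders for your own values.
cat <<EOF | tee service.json
{
  "name": "my-scale-service",
  "provider": "aws",
  "region": "us-east-1",
  "numReplicas": 3
}
EOF

curl --silent --user $KEY_ID:$KEY_SECRET \
  -X POST -H "Content-Type: application/json" \
  https://api.clickhouse.cloud/v1/organizations/$ORG_ID/services \
  -d @service.json | jq
```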

docs/cloud/manage/upgrades.md

Lines changed: 3 additions & 1 deletion
@@ -98,7 +98,9 @@ Specifically, services will:
 - Be meant for customers that want additional time to test ClickHouse releases on their non-production environments before the production upgrade. Non-production environments can either get upgrades on the Fast or the Regular release channel for testing and validation.
 
 :::note
-You can change release channels at any time. However, in certain cases, the change will only apply to future releases. For example, if your service is already on the Fast release channel and has received the upgrade, switching to a regular or slow release channel will not downgrade your service to a previous version. It will follow the channel specific release schedule for upcoming updates.
+You can change release channels at any time. However, in certain cases, the change will only apply to future releases.
+- Moving to a faster channel (e.g. Slow to Regular, or Regular to Fast) will immediately upgrade your service.
+- Moving to a slower channel (e.g. Regular to Slow, or Fast to Regular or Slow) will not downgrade your service; you stay on your current version until a newer one is available in that channel.
 :::
 
 ## Scheduled upgrades {#scheduled-upgrades}

docs/cloud/reference/changelog.md

Lines changed: 7 additions & 0 deletions
@@ -31,6 +31,13 @@ import dashboards from '@site/static/images/cloud/reference/may-30-dashboards.pn
 
 In addition to this ClickHouse Cloud changelog, please see the [Cloud Compatibility](/cloud/reference/cloud-compatibility.md) page.
 
+## August 8, 2025 {#august-08-2025}
+
+- **Notifications**: Users will now receive a UI notification when their service starts upgrading to a new ClickHouse version. Additional email and Slack notifications can be added via the notification center.
+- **ClickPipes**: Azure Blob Storage (ABS) ClickPipes support was added to the ClickHouse Terraform provider. See the provider documentation for an example of how to programmatically create an ABS ClickPipe.
+- [Bug fix] Object storage ClickPipes writing to a destination table using the Null engine now report "Total records" and "Data ingested" metrics in the UI.
+- [Bug fix] The "Time period" selector for metrics in the UI defaulted to "24 hours" regardless of the selected time period. This has now been fixed, and the UI correctly updates the charts for the selected time period.
+- **Cross-region private link (AWS)** is now Generally Available. Please refer to the [documentation](/manage/security/aws-privatelink) for the list of supported regions.
 
 ## July 31, 2025 {#july-31-2025}

docs/cloud/security/setting-ip-filters.md

Lines changed: 8 additions & 3 deletions
@@ -24,6 +24,11 @@ Classless Inter-domain Routing (CIDR) notation, allows you to specify IP address
 
 ## Create or modify an IP access list {#create-or-modify-an-ip-access-list}
 
+:::note Applicable only to connections outside of PrivateLink
+IP access lists only apply to connections from the public internet, outside of [PrivateLink](/cloud/security/private-link-overview).
+If you only want traffic from PrivateLink, set `DenyAll` in the IP allow list.
+:::
+
 <details>
 <summary>IP access list for ClickHouse services</summary>
 

@@ -63,15 +68,15 @@ This screenshot shows an access list which allows traffic from a range of IP add
 
 <Image img={ip_filter_add_single_ip} size="md" alt="Adding a single IP to the access list in ClickHouse Cloud" border/>
 
-1. Delete an existing entry
+2. Delete an existing entry
 
    Clicking the cross (x) deletes an entry
 
-1. Edit an existing entry
+3. Edit an existing entry
 
    Directly modifying the entry
 
-1. Switch to allow access from **Anywhere**
+4. Switch to allow access from **Anywhere**
 
    This is not recommended, but it is allowed. We recommend that you expose an application built on top of ClickHouse to the public and restrict access to the back-end ClickHouse Cloud service.

docs/deployment-guides/replication-sharding-examples/_snippets/_verify_keeper_using_mntr.mdx

Lines changed: 2 additions & 2 deletions
@@ -16,7 +16,7 @@ Run the command below from a shell on `clickhouse-keeper-01`, `clickhouse-keeper
 for `clickhouse-keeper-01` is shown below:
 
 ```bash
-docker exec -it clickhouse-keeper-01 echo mntr | nc localhost 9181
+docker exec -it clickhouse-keeper-01 /bin/sh -c 'echo mntr | nc 127.0.0.1 9181'
 ```
 
 The response below shows an example response from a follower node:
@@ -67,4 +67,4 @@ zk_max_file_descriptor_count 18446744073709551615
 zk_followers 2
 zk_synced_followers 2
 # highlight-end
-```
+```

docs/integrations/data-ingestion/clickpipes/object-storage.md

Lines changed: 0 additions & 1 deletion
@@ -128,7 +128,6 @@ To increase the throughput on large ingest jobs, we recommend scaling the ClickH
 - Role authentication is not available for S3 ClickPipes for ClickHouse Cloud instances deployed into GCP or Azure. It is only supported for AWS ClickHouse Cloud instances.
 - ClickPipes will only attempt to ingest objects at 10GB or smaller in size. If a file is greater than 10GB an error will be appended to the ClickPipes dedicated error table.
 - Azure Blob Storage pipes with continuous ingest on containers with over 100k files will have a latency of around 10–15 seconds in detecting new files. Latency increases with file count.
-- Object Storage ClickPipes ClickPipes inserting into a destination using [Null table engine](/engines/table-engines/special/null) won't have data for "Total records" or "Data ingested" in the UI.
 - Object Storage ClickPipes **does not** share a listing syntax with the [S3 Table Function](/sql-reference/table-functions/s3), nor Azure with the [AzureBlobStorage Table function](/sql-reference/table-functions/azureBlobStorage).
   - `?` — Substitutes any single character
   - `*` — Substitutes any number of any characters except / including empty string
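
Since that listing syntax differs from the S3 table function's, a couple of made-up paths may help make the wildcard rules concrete (the bucket layout and file names below are hypothetical):

```bash
# Hypothetical patterns illustrating the ClickPipes wildcard rules above:
#   data/file_?.csv -> matches data/file_1.csv and data/file_a.csv,
#                      but not data/file_10.csv ("?" is exactly one character)
#   data/*.csv      -> matches data/report.csv and data/.csv (empty match),
#                      but not data/2024/jan.csv ("*" never crosses "/")
```
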
Lines changed: 92 additions & 0 deletions
@@ -0,0 +1,92 @@
+---
+title: 'Scaling DB ClickPipes via OpenAPI'
+description: 'Doc for scaling DB ClickPipes via OpenAPI'
+slug: /integrations/clickpipes/postgres/scaling
+sidebar_label: 'Scaling'
+---
+
+:::caution Most users won't need this API
+The default configuration of DB ClickPipes is designed to handle the majority of workloads out of the box. If you think your workload requires scaling, open a [support case](https://clickhouse.com/support/program) and we'll guide you through the optimal settings for your use case.
+:::
+
+The scaling API may be useful for:
+- Large initial loads (over 4 TB)
+- Migrating a moderate amount of data as quickly as possible
+- Supporting over 8 CDC ClickPipes under the same service
+
+Before attempting to scale up, consider:
+- Ensuring the source DB has sufficient available capacity
+- First adjusting [initial load parallelism and partitioning](/integrations/data-ingestion/clickpipes/postgres/parallel_initial_load) when creating a ClickPipe
+- Checking for [long-running transactions](/integrations/clickpipes/postgres/sync_control#transactions-pg-sync) on the source that could be causing CDC delays
+
+**Increasing the scale will proportionally increase your ClickPipes compute costs.** If you're scaling up just for the initial loads, it's important to scale down after the snapshot is finished to avoid unexpected charges. For more details on pricing, see [Postgres CDC Pricing](/cloud/manage/billing/overview#clickpipes-for-postgres-cdc).
+
+## Prerequisites for this process {#prerequisites}
+
+Before you get started, you will need:
+
+1. A [ClickHouse API key](/cloud/manage/openapi) with Admin permissions on the target ClickHouse Cloud service.
+2. A DB ClickPipe (Postgres, MySQL, or MongoDB) provisioned in the service at some point in time. The CDC infrastructure is created along with the first ClickPipe, and the scaling endpoints become available from that point onwards.
+
+## Steps to scale DB ClickPipes {#cdc-scaling-steps}
+
+Set the following environment variables before running any commands:
+
+```bash
+ORG_ID=<Your ClickHouse organization ID>
+SERVICE_ID=<Your ClickHouse service ID>
+KEY_ID=<Your ClickHouse key ID>
+KEY_SECRET=<Your ClickHouse key secret>
+```
+
+Fetch the current scaling configuration (optional):
+
+```bash
+curl --silent --user $KEY_ID:$KEY_SECRET \
+  https://api.clickhouse.cloud/v1/organizations/$ORG_ID/services/$SERVICE_ID/clickpipesCdcScaling \
+  | jq
+
+# example result:
+{
+  "result": {
+    "replicaCpuMillicores": 2000,
+    "replicaMemoryGb": 8
+  },
+  "requestId": "04310d9e-1126-4c03-9b05-2aa884dbecb7",
+  "status": 200
+}
+```
+
+Set the desired scaling. Supported configurations include 1-24 CPU cores, with memory (in GB) set to 4× the core count:
+
+```bash
+cat <<EOF | tee cdc_scaling.json
+{
+  "replicaCpuMillicores": 24000,
+  "replicaMemoryGb": 96
+}
+EOF
+
+curl --silent --user $KEY_ID:$KEY_SECRET \
+  -X PATCH -H "Content-Type: application/json" \
+  https://api.clickhouse.cloud/v1/organizations/$ORG_ID/services/$SERVICE_ID/clickpipesCdcScaling \
+  -d @cdc_scaling.json | jq
+```
+
+Wait for the configuration to propagate (typically 3-5 minutes). After the scaling is finished, the GET endpoint will reflect the new values:
+
+```bash
+curl --silent --user $KEY_ID:$KEY_SECRET \
+  https://api.clickhouse.cloud/v1/organizations/$ORG_ID/services/$SERVICE_ID/clickpipesCdcScaling \
+  | jq
+
+# example result:
+{
+  "result": {
+    "replicaCpuMillicores": 24000,
+    "replicaMemoryGb": 96
+  },
+  "requestId": "5a76d642-d29f-45af-a857-8c4d4b947bf0",
+  "status": 200
+}
+```
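
The new page stresses scaling down after the snapshot finishes to avoid unexpected charges. A minimal sketch of that step, reusing the `PATCH` endpoint and environment variables from the hunk above; the 2-core/8 GB target simply mirrors the starting configuration shown in the first GET example, not a required value:

```bash
# Scale back down once the initial load is finished; any supported
# core/memory pair (1-24 cores, memory = 4x cores in GB) works the same way.
cat <<EOF | tee cdc_scaling_down.json
{
  "replicaCpuMillicores": 2000,
  "replicaMemoryGb": 8
}
EOF

curl --silent --user $KEY_ID:$KEY_SECRET \
  -X PATCH -H "Content-Type: application/json" \
  https://api.clickhouse.cloud/v1/organizations/$ORG_ID/services/$SERVICE_ID/clickpipesCdcScaling \
  -d @cdc_scaling_down.json | jq
```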

docs/integrations/data-ingestion/data-sources-index.md

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 ---
-slug: /integrations/index
+slug: /integrations/data-sources/index
 keywords: ['AWS S3', 'Azure Data Factory', 'PostgreSQL', 'Kafka', 'MySQL', 'Cassandra', 'Data Factory', 'Redis', 'RabbitMQ', 'MongoDB', 'Google Cloud Storage', 'Hive', 'Hudi', 'Iceberg', 'MinIO', 'Delta Lake', 'RocksDB', 'Splunk', 'SQLite', 'NATS', 'EMQX', 'local files', 'JDBC', 'ODBC']
 description: 'Datasources overview page'
 title: 'Data Sources'

docs/integrations/data-visualization/tableau/tableau-analysis-tips.md

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ ClickHouse has a huge number of functions that can be used for data analysis —
 ███████████████ 259.37 million
 ```
 - **`COUNTD_UNIQ([my_field])`** *(added in v0.2.0)* — Calculates the approximate number of different values of the argument. Equivalent of [uniq()](/sql-reference/aggregate-functions/reference/uniq/). Much faster than `COUNTD()`.
-- **`DATE_BIN('day', 10, [my_datetime_or_date])`** *(added in v0.2.1)* — equivalent of [`toStartOfInterval()`](/sql-reference/functions/date-time-functions#tostartofinterval) in ClickHouse. Rounds down a Date or Date & Time to the given interval, for example:
+- **`DATE_BIN('day', 10, [my_datetime_or_date])`** *(added in v0.2.1)* — equivalent of [`toStartOfInterval()`](/sql-reference/functions/date-time-functions#toStartOfInterval) in ClickHouse. Rounds down a Date or Date & Time to the given interval, for example:
 ```text
 == my_datetime_or_date == | == DATE_BIN('day', 10, [my_datetime_or_date]) ==
 28.07.2004 06:54:50       | 21.07.2004 00:00:00
