Commit 3ef09d0

Merge branch 'main' into cloud-disaster-recovery

2 parents 6714b01 + 4453f28

File tree

35 files changed: +5851 / -1116 lines

docs/chdb/api/python.md

Lines changed: 3517 additions & 0 deletions
Large diffs are not rendered by default.

docs/chdb/index.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -27,7 +27,7 @@ You can use it when you want to get the power of ClickHouse in a programming lan
 
 chDB has the following language bindings:
 
-* [Python](install/python.md)
+* [Python](install/python.md) - [API Reference](api/python.md)
 * [Go](install/go.md)
 * [Rust](install/rust.md)
 * [NodeJS](install/nodejs.md)
```

docs/chdb/install/index.md

Lines changed: 8 additions & 8 deletions
```diff
@@ -8,11 +8,11 @@ doc_type: 'landing-page'
 
 Instructions for how to get setup with chDB are available below for the following languages and runtimes:
 
-| Language |
-|----------------------------------------|
-| [Python](/chdb/install/python) |
-| [NodeJS](/chdb/install/nodejs) |
-| [Go](/chdb/install/go) |
-| [Rust](/chdb/install/rust) |
-| [Bun](/chdb/install/bun) |
-| [C and C++](/chdb/install/c) |
+| Language                                | API Reference                       |
+|----------------------------------------|-------------------------------------|
+| [Python](/chdb/install/python)         | [Python API](/chdb/api/python)      |
+| [NodeJS](/chdb/install/nodejs)         |                                     |
+| [Go](/chdb/install/go)                 |                                     |
+| [Rust](/chdb/install/rust)             |                                     |
+| [Bun](/chdb/install/bun)               |                                     |
+| [C and C++](/chdb/install/c)           |                                     |
```
Lines changed: 11 additions & 0 deletions

```diff
@@ -0,0 +1,11 @@
+---
+slug: /cloud/guides/sql-console/gather-connection-details
+sidebar_label: 'Gather your connection details'
+title: 'Gather your connection details'
+description: 'Gather your connection details'
+doc_type: 'guide'
+---
+
+import ConnectionDetails from '@site/docs/_snippets/_gather_your_details_http.mdx';
+
+<ConnectionDetails />
```

docs/cloud/reference/03_billing/01_billing_overview.md

Lines changed: 8 additions & 0 deletions
```diff
@@ -180,6 +180,14 @@
 
 A ClickHouse Credit is a unit of credit toward Customer's usage of ClickHouse Cloud equal to one (1) US dollar, to be applied based on ClickHouse's then-current published price list.
 
+### Where can I find legacy pricing? {#find-legacy-pricing}
+
+Legacy pricing information can be found [here](https://clickhouse.com/pricing?legacy=true).
+
+:::note
+If you are billed through Stripe, your Stripe invoice shows 1 CHC as \$0.01 USD. This allows accurate billing on Stripe, which cannot bill fractional quantities of our standard SKU of 1 CHC = \$1 USD.
+:::
+
 ### How is compute metered? {#how-is-compute-metered}
 
 ClickHouse Cloud meters compute on a per-minute basis, in 8G RAM increments.
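The Stripe note above is a simple unit conversion. As an illustrative sketch only (this is not ClickHouse's billing code, and the helper name is hypothetical), fractional CHC usage maps onto whole \$0.01 Stripe units like this:

```python
from decimal import Decimal, ROUND_HALF_UP

CHC_USD = Decimal("1.00")          # 1 ClickHouse Credit = 1 US dollar
STRIPE_UNIT_USD = Decimal("0.01")  # Stripe SKU unit = $0.01

def chc_to_stripe_quantity(chc: str) -> int:
    """Express a (possibly fractional) CHC amount as a whole number of $0.01 units."""
    usd = Decimal(chc) * CHC_USD
    units = (usd / STRIPE_UNIT_USD).quantize(Decimal("1"), rounding=ROUND_HALF_UP)
    return int(units)

# e.g. 12.34 CHC -> 1234 Stripe units of the $0.01 SKU
```

Using `Decimal` rather than floats avoids binary rounding artifacts when scaling dollar amounts by 100.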

docs/integrations/data-ingestion/clickpipes/index.md

Lines changed: 20 additions & 2 deletions
```diff
@@ -86,11 +86,29 @@ Steps:
 ## Adjusting ClickPipes advanced settings {#clickpipes-advanced-settings}
 ClickPipes provides sensible defaults that cover the requirements of most use cases. If your use case requires additional fine-tuning, you can adjust the following settings:
 
-- **Streaming max insert wait time**: Configures the maximum wait period before inserting data into the ClickHouse cluster. Applies to streaming ClickPipes (e.g., Kafka, Kinesis).
-- **Object storage polling interval**: Configures how frequently ClickPipes checks object storage for new data. Applies to object storage ClickPipes (e.g., S3, GCS).
+### Object Storage ClickPipes {#clickpipes-advanced-settings-object-storage}
+
+| Setting                              | Default value | Description |
+|--------------------------------------|---------------|-------------|
+| `Max insert bytes`                   | 10GB          | Number of bytes to process in a single insert batch. |
+| `Max file count`                     | 100           | Maximum number of files to process in a single insert batch. |
+| `Max threads`                        | auto(3)       | [Maximum number of concurrent threads](/operations/settings/settings#max_threads) for file processing. |
+| `Max insert threads`                 | 1             | [Maximum number of concurrent insert threads](/operations/settings/settings#max_insert_threads) for file processing. |
+| `Min insert block size bytes`        | 1GB           | [Minimum size in bytes of a block](/operations/settings/settings#min_insert_block_size_bytes) that can be inserted into a table. |
+| `Max download threads`               | 4             | [Maximum number of concurrent download threads](/operations/settings/settings#max_download_threads). |
+| `Object storage polling interval`    | 30s           | Configures how frequently ClickPipes checks object storage for new data. |
+| `Parallel distributed insert select` | 2             | [Parallel distributed insert select setting](/operations/settings/settings#parallel_distributed_insert_select). |
+| `Parallel view processing`           | false         | Whether to enable pushing to attached views [concurrently instead of sequentially](/operations/settings/settings#parallel_view_processing). |
+| `Use cluster function`               | true          | Whether to process files in parallel across multiple nodes. |
 
 <Image img={cp_advanced_settings} alt="Advanced settings for ClickPipes" size="lg" border/>
 
+### Streaming ClickPipes {#clickpipes-advanced-settings-streaming}
+
+| Setting                          | Default value | Description |
+|----------------------------------|---------------|-------------|
+| `Streaming max insert wait time` | 5s            | Configures the maximum wait period before inserting data into the ClickHouse cluster. |
+
 ## Error reporting {#error-reporting}
 ClickPipes will store errors in two separate tables depending on the type of error encountered during the ingestion process.
 ### Record Errors {#record-errors}
```
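The insert-batching limits documented in this file (max bytes and file count per batch, plus a maximum wait before flushing) can be sketched as a flush-on-any-limit loop. This is an illustrative model under assumed semantics, not ClickPipes' actual implementation; the class and method names are hypothetical:

```python
import time

# Defaults taken from the settings tables; purely illustrative.
MAX_INSERT_BYTES = 10 * 1024**3  # "Max insert bytes" (10GB)
MAX_FILE_COUNT = 100             # "Max file count"
MAX_WAIT_SECONDS = 5.0           # "Streaming max insert wait time" (5s)

class InsertBatcher:
    """Accumulate files; flush when byte, count, or wait-time limit is hit."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock           # injectable clock, handy for testing
        self.files, self.nbytes = [], 0
        self.started = None          # when the current batch began

    def add(self, name, size):
        """Add a file; return the flushed batch if a limit was hit, else None."""
        if self.started is None:
            self.started = self.clock()
        self.files.append(name)
        self.nbytes += size
        if (self.nbytes >= MAX_INSERT_BYTES
                or len(self.files) >= MAX_FILE_COUNT
                or self.clock() - self.started >= MAX_WAIT_SECONDS):
            batch, self.files, self.nbytes, self.started = self.files, [], 0, None
            return batch
        return None
```

Raising `Max insert bytes` or `Max file count` trades insert frequency for larger parts, which is the usual lever when many small files cause excessive merges.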

docs/integrations/data-ingestion/clickpipes/secure-kinesis.md

Lines changed: 46 additions & 61 deletions
````diff
@@ -4,6 +4,7 @@ sidebar_label: 'Kinesis Role-Based Access'
 title: 'Kinesis Role-Based Access'
 description: 'This article demonstrates how ClickPipes customers can leverage role-based access to authenticate with Amazon Kinesis and access their data streams securely.'
 doc_type: 'guide'
+keywords: ['Amazon Kinesis']
 ---
 
 import secure_kinesis from '@site/static/images/integrations/data-ingestion/clickpipes/securekinesis.jpg';
@@ -12,6 +13,12 @@ import Image from '@theme/IdealImage';
 
 This article demonstrates how ClickPipes customers can leverage role-based access to authenticate with Amazon Kinesis and access their data streams securely.
 
+## Prerequisites {#prerequisite}
+
+To follow this guide, you will need:
+- An active ClickHouse Cloud service
+- An AWS account
+
 ## Introduction {#introduction}
 
 Before diving into the setup for secure Kinesis access, it's important to understand the mechanism. Here's an overview of how ClickPipes can access Amazon Kinesis streams by assuming a role within customers' AWS accounts.
@@ -22,92 +29,70 @@ Using this approach, customers can manage all access to their Kinesis data strea
 
 ## Setup {#setup}
 
-### Obtaining the ClickHouse service IAM role Arn {#obtaining-the-clickhouse-service-iam-role-arn}
-
-1 - Login to your ClickHouse cloud account.
-
-2 - Select the ClickHouse service you want to create the integration
-
-3 - Select the **Settings** tab
-
-4 - Scroll down to the **Network security information** section at the bottom of the page
-
-5 - Copy the **Service role ID (IAM)** value belong to the service as shown below.
+<VerticalStepper headerLevel="h3">
+
+### Obtaining the ClickHouse service IAM role Arn {#obtaining-the-clickhouse-service-iam-role-arn}
+
+1. Log in to your ClickHouse Cloud account.
+2. Select the ClickHouse service for which you want to create the integration.
+3. Select the **Settings** tab.
+4. Scroll down to the **Network security information** section at the bottom of the page.
+5. Copy the **Service role ID (IAM)** value belonging to the service, as shown below.
 
 <Image img={secures3_arn} alt="Secure S3 ARN" size="lg" border/>
 
 ### Setting up IAM assume role {#setting-up-iam-assume-role}
 
 #### Manually create IAM role. {#manually-create-iam-role}
 
-1 - Login to your AWS Account in the web browser with an IAM user that has permission to create & manage IAM role.
-
-2 - Browse to IAM Service Console
-
-3 - Create a new IAM role with the following IAM & Trust policy. Note that the name of the IAM role **must start with** `ClickHouseAccessRole-` for this to work.
-
-Trust policy (Please replace `{ClickHouse_IAM_ARN}` with the IAM Role arn belong to your ClickHouse instance):
+1. Log in to your AWS account in the web browser with an IAM user that has permission to create and manage IAM roles.
+2. Browse to the IAM service console.
+3. Create a new IAM role with the trusted entity type `AWS account`. Note that the name of the IAM role **must start with** `ClickHouseAccessRole-` for this to work.
+
+For the trust policy, replace `{ClickHouse_IAM_ARN}` with the IAM role ARN belonging to your ClickHouse instance.
+For the IAM policy, replace `{STREAM_NAME}` with your Kinesis stream name.
 
 ```json
 {
     "Version": "2012-10-17",
     "Statement": [
         {
+            "Sid": "Statement1",
             "Effect": "Allow",
             "Principal": {
                 "AWS": "{ClickHouse_IAM_ARN}"
             },
             "Action": "sts:AssumeRole"
+        },
+        {
+            "Action": [
+                "kinesis:DescribeStream",
+                "kinesis:GetShardIterator",
+                "kinesis:GetRecords",
+                "kinesis:ListShards",
+                "kinesis:SubscribeToShard",
+                "kinesis:DescribeStreamConsumer",
+                "kinesis:RegisterStreamConsumer",
+                "kinesis:DeregisterStreamConsumer",
+                "kinesis:ListStreamConsumers"
+            ],
+            "Resource": [
+                "arn:aws:kinesis:region:account-id:stream/{STREAM_NAME}/*"
+            ],
+            "Effect": "Allow"
+        },
+        {
+            "Action": [
+                "kinesis:ListStreams"
+            ],
+            "Resource": "*",
+            "Effect": "Allow"
         }
     ]
 }
 ```
 
-IAM policy (Please replace `{STREAM_NAME}` with your Kinesis stream name):
-
-```json
-{
-    "Version": "2012-10-17",
-    "Statement": [
-        {
-            "Action": [
-                "kinesis:DescribeStream",
-                "kinesis:GetShardIterator",
-                "kinesis:GetRecords",
-                "kinesis:ListShards",
-                "kinesis:SubscribeToShard",
-                "kinesis:DescribeStreamConsumer",
-                "kinesis:RegisterStreamConsumer",
-                "kinesis:DeregisterStreamConsumer",
-                "kinesis:ListStreamConsumers"
-            ],
-            "Resource": [
-                "arn:aws:kinesis:region:account-id:stream/{STREAM_NAME}"
-            ],
-            "Effect": "Allow"
-        },
-        {
-            "Action": [
-                "kinesis:SubscribeToShard",
-                "kinesis:DescribeStreamConsumer",
-                "kinesis:RegisterStreamConsumer",
-                "kinesis:DeregisterStreamConsumer"
-            ],
-            "Resource": [
-                "arn:aws:kinesis:region:account-id:stream/{STREAM_NAME}/*"
-            ],
-            "Effect": "Allow"
-        },
-        {
-            "Action": [
-                "kinesis:ListStreams"
-            ],
-            "Resource": "*",
-            "Effect": "Allow"
-        }
-    ]
-}
-```
-
-4 - Copy the new **IAM Role Arn** after creation. This is what needed to access your Kinesis stream.
+4. Copy the new **IAM Role Arn** after creation. This is what is needed to access your Kinesis stream.
+
+</VerticalStepper>
````
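The policy JSON above uses `{ClickHouse_IAM_ARN}` and `{STREAM_NAME}` placeholders, and the role name must carry the `ClickHouseAccessRole-` prefix. A small sketch of substituting the placeholders and sanity-checking the role name before pasting into the AWS console (the helpers are hypothetical, not part of ClickPipes; the policy is abbreviated to two statements):

```python
import json

# Abbreviated version of the documented policy, with the same placeholders.
POLICY_TEMPLATE = '''{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "{ClickHouse_IAM_ARN}"},
      "Action": "sts:AssumeRole"
    },
    {
      "Action": ["kinesis:DescribeStream", "kinesis:GetRecords"],
      "Resource": ["arn:aws:kinesis:region:account-id:stream/{STREAM_NAME}/*"],
      "Effect": "Allow"
    }
  ]
}'''

def render_policy(template, **placeholders):
    """Substitute {NAME} placeholders, then parse to verify the result is valid JSON."""
    for name, value in placeholders.items():
        template = template.replace("{%s}" % name, value)
    return json.loads(template)

def is_valid_role_name(name):
    """ClickPipes requires the IAM role name to start with this prefix."""
    return name.startswith("ClickHouseAccessRole-")

policy = render_policy(
    POLICY_TEMPLATE,
    ClickHouse_IAM_ARN="arn:aws:iam::111122223333:role/ClickHouse-example",
    STREAM_NAME="my-stream",
)
```

Parsing the rendered text with `json.loads` catches a forgotten or mistyped placeholder before AWS rejects the policy.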

docs/integrations/data-ingestion/dbms/postgresql/connecting-to-postgresql.md

Lines changed: 5 additions & 2 deletions
```diff
@@ -14,11 +14,14 @@ import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
 
 This page covers following options for integrating PostgreSQL with ClickHouse:
 
-- using [ClickPipes](/integrations/clickpipes/postgres), the managed integration service for ClickHouse Cloud powered by PeerDB.
-- using [PeerDB](https://github.com/PeerDB-io/peerdb), an open-source CDC tool specifically designed for PostgreSQL database replication to both self-hosted ClickHouse and ClickHouse Cloud.
 - using the `PostgreSQL` table engine, for reading from a PostgreSQL table
 - using the experimental `MaterializedPostgreSQL` database engine, for syncing a database in PostgreSQL with a database in ClickHouse
 
+:::tip
+We recommend using [ClickPipes](/integrations/clickpipes/postgres), a managed integration service for ClickHouse Cloud powered by PeerDB.
+Alternatively, [PeerDB](https://github.com/PeerDB-io/peerdb) is available as an open-source CDC tool specifically designed for PostgreSQL database replication to both self-hosted ClickHouse and ClickHouse Cloud.
+:::
+
 ## Using the PostgreSQL table engine {#using-the-postgresql-table-engine}
 
 The `PostgreSQL` table engine allows **SELECT** and **INSERT** operations on data stored on the remote PostgreSQL server from ClickHouse.
```
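As a rough illustration of what wiring up the `PostgreSQL` table engine involves, here is a hypothetical Python helper that renders the engine's `CREATE TABLE` DDL; the table, column, and connection values are placeholders, not anything from this commit:

```python
def postgresql_table_ddl(name, columns, host, database, table, user, password):
    """Render CREATE TABLE DDL for the ClickHouse PostgreSQL table engine.

    Engine arguments follow the documented order:
    PostgreSQL('host:port', 'database', 'table', 'user', 'password').
    """
    return (
        f"CREATE TABLE {name} ({columns}) "
        f"ENGINE = PostgreSQL('{host}', '{database}', '{table}', "
        f"'{user}', '{password}')"
    )

ddl = postgresql_table_ddl(
    "pg_users", "id UInt64, name String",
    "postgres.example.com:5432", "app_db", "users", "reader", "secret",
)
```

Once such a table exists in ClickHouse, `SELECT` and `INSERT` against it are forwarded to the remote PostgreSQL server.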
