docs/cloud/reference/03_billing/01_billing_overview.md (8 additions, 0 deletions)
@@ -180,6 +180,14 @@ Best for: large scale, mission critical deployments that have stringent security
A ClickHouse Credit is a unit of credit toward Customer's usage of ClickHouse Cloud equal to one (1) US dollar, to be applied based on ClickHouse's then-current published price list.
### Where can I find legacy pricing? {#find-legacy-pricing}
Legacy pricing information can be found [here](https://clickhouse.com/pricing?legacy=true).
:::note
If you are billed through Stripe, you will see that 1 CHC is equal to \$0.01 USD on your Stripe invoice. This allows accurate billing on Stripe, which cannot bill fractional quantities of our standard SKU of 1 CHC = \$1 USD.
:::
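The CHC-to-Stripe conversion above is just arithmetic. As a sketch, with `stripe_quantity` being a hypothetical helper name (not part of any ClickHouse or Stripe API): billing in whole units of \$0.01 lets a fractional CHC amount appear as an integer quantity on the invoice.

```python
def stripe_quantity(chc_amount: float) -> int:
    """Illustrative only: 1 CHC = $1 USD, but Stripe line items here use
    1 unit = $0.01, so a CHC amount becomes a whole-number unit quantity."""
    return round(chc_amount * 100)

print(stripe_quantity(12.34))  # 1234 units of $0.01 = $12.34
print(stripe_quantity(0.5))    # a fractional CHC amount still bills as 50 whole units
```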
### How is compute metered? {#how-is-compute-metered}
ClickHouse Cloud meters compute on a per-minute basis, in 8 GiB RAM increments.
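The metering rule above can be sketched as arithmetic. This is an illustrative calculation only, assuming whole 8 GiB increments are billed per metered minute; `billed_increment_minutes` is a hypothetical helper, not a published billing formula:

```python
import math

def billed_increment_minutes(ram_gib: float, minutes: int) -> int:
    """Illustrative only: number of 8 GiB RAM increments, metered per minute."""
    increments = math.ceil(ram_gib / 8)
    return increments * minutes

# A hypothetical 24 GiB replica running for 30 minutes:
print(billed_increment_minutes(24, 30))  # 3 increments x 30 minutes = 90
```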
ClickPipes provides sensible defaults that cover the requirements of most use cases. If your use case requires additional fine-tuning, you can adjust the following settings:
| Setting | Default | Description |
|---------|---------|-------------|
|`Max insert bytes`| 10GB | Number of bytes to process in a single insert batch. |
|`Max file count`| 100 | Maximum number of files to process in a single insert batch. |
|`Max threads`| auto(3) | [Maximum number of concurrent threads](/operations/settings/settings#max_threads) for file processing. |
|`Max insert threads`| 1 | [Maximum number of concurrent insert threads](/operations/settings/settings#max_insert_threads) for file processing. |
|`Min insert block size bytes`| 1GB | [Minimum size of bytes in the block](/operations/settings/settings#min_insert_block_size_bytes) which can be inserted into a table. |
|`Max download threads`| 4 | [Maximum number of concurrent download threads](/operations/settings/settings#max_download_threads). |
|`Object storage polling interval`| 30s | Configures how frequently ClickPipes checks object storage for new data. |
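To show how the batching limits interact, here is a minimal sketch, assuming (this is not ClickPipes source code) that a batch is inserted as soon as either the byte limit or the file-count limit is reached:

```python
def should_flush(batch_bytes: int, batch_files: int,
                 max_insert_bytes: int = 10 * 1024**3,  # "Max insert bytes": 10GB
                 max_file_count: int = 100) -> bool:    # "Max file count": 100
    """Illustrative sketch: insert the pending batch once either limit is hit."""
    return batch_bytes >= max_insert_bytes or batch_files >= max_file_count

print(should_flush(batch_bytes=11 * 1024**3, batch_files=5))   # byte limit exceeded
print(should_flush(batch_bytes=1 * 1024**3, batch_files=100))  # file-count limit hit
print(should_flush(batch_bytes=1 * 1024**3, batch_files=5))    # keep accumulating
```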
description: 'This article demonstrates how ClickPipes customers can leverage role-based access to authenticate with Amazon Kinesis and access their data streams securely.'
doc_type: 'guide'
keywords: ['Amazon Kinesis']
---
import secure_kinesis from '@site/static/images/integrations/data-ingestion/clickpipes/securekinesis.jpg';
@@ -12,6 +13,12 @@ import Image from '@theme/IdealImage';
This article demonstrates how ClickPipes customers can leverage role-based access to authenticate with Amazon Kinesis and access their data streams securely.
## Prerequisites {#prerequisite}
To follow this guide, you will need:
- An active ClickHouse Cloud service
- An AWS account
## Introduction {#introduction}
Before diving into the setup for secure Kinesis access, it's important to understand the mechanism. Here's an overview of how ClickPipes can access Amazon Kinesis streams by assuming a role within customers' AWS accounts.
@@ -22,92 +29,70 @@ Using this approach, customers can manage all access to their Kinesis data strea
## Setup {#setup}
<VerticalStepper headerLevel="h3"/>

### Obtaining the ClickHouse service IAM role ARN {#obtaining-the-clickhouse-service-iam-role-arn}

1. Login to your ClickHouse Cloud account.
2. Select the ClickHouse service you want to create the integration for.
3. Select the **Settings** tab.
4. Scroll down to the **Network security information** section at the bottom of the page.
5. Copy the **Service role ID (IAM)** value belonging to the service as shown below.
### Setting up IAM assume role {#setting-up-iam-assume-role}
#### Manually create IAM role {#manually-create-iam-role}
1. Login to your AWS account in the web browser with an IAM user that has permission to create and manage IAM roles.
2. Browse to the IAM service console.
3. Create a new IAM role with a Trusted Entity Type of `AWS account`. Note that the name of the IAM role **must start with** `ClickHouseAccessRole-` for this to work.

For the trust policy, please replace `{ClickHouse_IAM_ARN}` with the IAM role ARN belonging to your ClickHouse instance.
For the IAM policy, please replace `{STREAM_NAME}` with your Kinesis stream name.
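The exact policies are not reproduced here, but a cross-account trust policy of this kind typically follows the standard AWS shape below. This is a minimal sketch, keeping the `{ClickHouse_IAM_ARN}` placeholder from the text; verify it against the policy shipped with the official guide:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "{ClickHouse_IAM_ARN}"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The `Principal` element is what scopes the trust: only the ClickHouse service's IAM role can call `sts:AssumeRole` on this role.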
docs/integrations/data-ingestion/dbms/postgresql/connecting-to-postgresql.md (5 additions, 2 deletions)
@@ -14,11 +14,14 @@ import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
This page covers the following options for integrating PostgreSQL with ClickHouse:
- using the `PostgreSQL` table engine, for reading from a PostgreSQL table
- using the experimental `MaterializedPostgreSQL` database engine, for syncing a database in PostgreSQL with a database in ClickHouse
:::tip
We recommend using [ClickPipes](/integrations/clickpipes/postgres), a managed integration service for ClickHouse Cloud powered by PeerDB.
Alternatively, [PeerDB](https://github.com/PeerDB-io/peerdb) is available as an open-source CDC tool specifically designed for PostgreSQL database replication to both self-hosted ClickHouse and ClickHouse Cloud.
:::
## Using the PostgreSQL table engine {#using-the-postgresql-table-engine}
The `PostgreSQL` table engine allows **SELECT** and **INSERT** operations on data stored on the remote PostgreSQL server from ClickHouse.
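As a minimal sketch of what such a table definition looks like, with hypothetical connection details (host, database, table, user, and password are placeholders to replace with your own):

```sql
-- Hypothetical connection details; adjust to your PostgreSQL server.
CREATE TABLE pg_mirror
(
    `id` UInt64,
    `value` String
)
ENGINE = PostgreSQL('postgres-host:5432', 'my_database', 'my_table', 'pg_user', 'password');

-- SELECTs are forwarded to the remote PostgreSQL table; INSERTs are written back to it.
SELECT * FROM pg_mirror;
```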