docs/_snippets/_gather_your_details_http.mdx (+2 −2)

@@ -12,10 +12,10 @@ To connect to ClickHouse with HTTP(S) you need this information:

The details for your ClickHouse Cloud service are available in the ClickHouse Cloud console. Select the service that you will connect to and click **Connect**:

-<Image img={cloud_connect_button} size="md" alt="ClickHouse Cloud service connect button" />
+<Image img={cloud_connect_button} size="md" alt="ClickHouse Cloud service connect button" border/>

Choose **HTTPS**, and the details are available in an example `curl` command.
docs/_snippets/_gather_your_details_native.md (+2 −2)

@@ -13,10 +13,10 @@ To connect to ClickHouse with native TCP you need this information:

The details for your ClickHouse Cloud service are available in the ClickHouse Cloud console. Select the service that you will connect to and click **Connect**:

-<Image img={cloud_connect_button} size="md" alt="ClickHouse Cloud service connect button" />
+<Image img={cloud_connect_button} size="md" alt="ClickHouse Cloud service connect button" border/>

Choose **Native**, and the details are available in an example `clickhouse-client` command.
docs/integrations/data-ingestion/clickpipes/index.md (+2 −2)

@@ -25,7 +25,7 @@ import Image from '@theme/IdealImage';

[ClickPipes](/integrations/clickpipes) is a managed integration platform that makes ingesting data from a diverse set of sources as simple as clicking a few buttons. Designed for the most demanding workloads, ClickPipes's robust and scalable architecture ensures consistent performance and reliability. ClickPipes can be used for long-term streaming needs or one-time data loading jobs.

## Supported Data Sources {#supported-data-sources}

@@ -67,7 +67,7 @@ Steps:

1. Create a custom role: `CREATE ROLE my_clickpipes_role SETTINGS ...`. See the [CREATE ROLE](/sql-reference/statements/create/role.md) syntax for details.
2. Add the custom role to the ClickPipes user in the `Details and Settings` step during ClickPipe creation.
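The two steps above could look like the following sketch (hypothetical role, setting, and table names; adjust to your own schema):

```sql
-- Hypothetical names throughout; see the CREATE ROLE syntax reference
-- for the settings that can be attached to a role.
CREATE ROLE my_clickpipes_role SETTINGS max_insert_block_size = 1048576;

-- Grant the role write access to the intended destination table.
GRANT INSERT ON my_database.my_destination_table TO my_clickpipes_role;
```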
-<Image img={cp_custom_role} alt="Assign a custom role" size="lg" />
+<Image img={cp_custom_role} alt="Assign a custom role" size="lg" border/>

## Error reporting {#error-reporting}

ClickPipes will create a table next to your destination table with the postfix `<destination_table_name>_clickpipes_error`. This table will contain any errors from the operations of your ClickPipe (network, connectivity, etc.) and also any data that doesn't conform to the schema. The error table has a [TTL](/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-ttl) of 7 days.
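For a hypothetical destination table named `my_destination_table`, the recorded errors could be inspected with a query like this (a sketch; the error table's exact columns are not shown here):

```sql
-- The error table name is the destination table name plus the
-- `_clickpipes_error` postfix; rows expire after the 7-day TTL.
SELECT *
FROM my_destination_table_clickpipes_error
LIMIT 10;
```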
-<Image img={cp_step1} alt="Select data source type" size="lg"/>
+<Image img={cp_step1} alt="Select data source type" size="lg" border/>

4. Fill out the form by providing your ClickPipe with a name, a description (optional), your credentials, and other connection details.

-<Image img={cp_step2} alt="Fill out connection details" size="lg"/>
+<Image img={cp_step2} alt="Fill out connection details" size="lg" border/>

5. Configure the schema registry. A valid schema is required for Avro streams and optional for JSON. This schema will be used to parse [AvroConfluent](../../../interfaces/formats.md/#data-format-avro-confluent) or validate JSON messages on the selected topic.

- Avro messages that cannot be parsed or JSON messages that fail validation will generate an error.
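As an illustration of the JSON-validation behavior described above, here is a minimal sketch with a hypothetical schema (ClickPipes' actual schema-registry validation is schema-driven and much richer):

```python
import json

# Hypothetical required fields and their expected JSON types; the real
# validation uses the schema configured in the schema registry step.
SCHEMA = {"id": int, "event": str, "ts": str}

def validate(message: str) -> bool:
    """Return True only if the message parses as a JSON object and each
    required field is present with the expected type."""
    try:
        doc = json.loads(message)
    except json.JSONDecodeError:
        return False
    if not isinstance(doc, dict):
        return False
    return all(isinstance(doc.get(field), typ) for field, typ in SCHEMA.items())

print(validate('{"id": 1, "event": "click", "ts": "2024-01-01"}'))  # True
print(validate('{"id": "oops"}'))                                   # False
```

Messages failing this kind of check are what end up in the ClickPipe's error table rather than the destination table.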
@@ -63,41 +63,41 @@ without an embedded schema id, then the specific schema ID or subject must be sp

6. Select your topic, and the UI will display a sample document from the topic.

-<Image img={cp_step3} alt="Set data format and topic" size="lg"/>
+<Image img={cp_step3} alt="Set data format and topic" size="lg" border/>

7. In the next step, you can select whether you want to ingest data into a new ClickHouse table or reuse an existing one. Follow the instructions on the screen to modify your table name, schema, and settings. You can see a real-time preview of your changes in the sample table at the top.

-<Image img={cp_step4a} alt="Set table, schema, and settings" size="lg"/>
+<Image img={cp_step4a} alt="Set table, schema, and settings" size="lg" border/>

You can also customize the advanced settings using the controls provided.

8. Alternatively, you can decide to ingest your data into an existing ClickHouse table. In that case, the UI will allow you to map fields from the source to the ClickHouse fields in the selected destination table.

-<Image img={cp_step4b} alt="Use an existing table" size="lg"/>
+<Image img={cp_step4b} alt="Use an existing table" size="lg" border/>

9. Finally, you can configure permissions for the internal ClickPipes user.

**Permissions:** ClickPipes will create a dedicated user for writing data into a destination table. You can select a role for this internal user using a custom role or one of the predefined roles:
- `Full access`: with full access to the cluster. It might be useful if you use a Materialized View or Dictionary with the destination table.
- `Only destination table`: with `INSERT` permissions to the destination table only.

11. **Congratulations!** You have successfully set up your first ClickPipe. If this is a streaming ClickPipe, it will run continuously, ingesting data in real time from your remote data source.
-<Image img={cp_step1} alt="Select data source type" size="lg" />
+<Image img={cp_step1} alt="Select data source type" size="lg" border/>

4. Fill out the form by providing your ClickPipe with a name, a description (optional), your IAM role or credentials, and other connection details.

-<Image img={cp_step2_kinesis} alt="Fill out connection details" size="lg" />
+<Image img={cp_step2_kinesis} alt="Fill out connection details" size="lg" border/>

5. Select the Kinesis stream and starting offset. The UI will display a sample document from the selected source. You can also enable Enhanced Fan-out for Kinesis streams to improve the performance and stability of your ClickPipe (more information on Enhanced Fan-out can be found [here](https://aws.amazon.com/blogs/aws/kds-enhanced-fanout)).

-<Image img={cp_step3_kinesis} alt="Set data format and topic" size="lg" />
+<Image img={cp_step3_kinesis} alt="Set data format and topic" size="lg" border/>

6. In the next step, you can select whether you want to ingest data into a new ClickHouse table or reuse an existing one. Follow the instructions on the screen to modify your table name, schema, and settings. You can see a real-time preview of your changes in the sample table at the top.

-<Image img={cp_step4a} alt="Set table, schema, and settings" size="lg" />
+<Image img={cp_step4a} alt="Set table, schema, and settings" size="lg" border/>

You can also customize the advanced settings using the controls provided.

7. Alternatively, you can decide to ingest your data into an existing ClickHouse table. In that case, the UI will allow you to map fields from the source to the ClickHouse fields in the selected destination table.

-<Image img={cp_step4b} alt="Use an existing table" size="lg" />
+<Image img={cp_step4b} alt="Use an existing table" size="lg" border/>

8. Finally, you can configure permissions for the internal ClickPipes user.

**Permissions:** ClickPipes will create a dedicated user for writing data into a destination table. You can select a role for this internal user using a custom role or one of the predefined roles:
- `Full access`: with full access to the cluster. It might be useful if you use a Materialized View or Dictionary with the destination table.
- `Only destination table`: with `INSERT` permissions to the destination table only.

-<Image img={cp_step5} alt="Permissions" />
+<Image img={cp_step5} alt="Permissions" border/>

9. By clicking on "Complete Setup", the system will register your ClickPipe, and you'll be able to see it listed in the summary table.

10. **Congratulations!** You have successfully set up your first ClickPipe. If this is a streaming ClickPipe, it will run continuously, ingesting data in real time from your remote data source. Otherwise, it will ingest the batch and complete.
-<Image img={cp_step1} alt="Select data source type" size="lg"/>
+<Image img={cp_step1} alt="Select data source type" size="lg" border/>

3. Fill out the form by providing your ClickPipe with a name, a description (optional), your IAM role or credentials, and the bucket URL. You can specify multiple files using bash-like wildcards. For more information, [see the documentation on using wildcards in path](#limitations).

-<Image img={cp_step2_object_storage} alt="Fill out connection details" size="lg"/>
+<Image img={cp_step2_object_storage} alt="Fill out connection details" size="lg" border/>
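The bash-like wildcards mentioned in step 3 behave much like glob/fnmatch patterns. A rough sketch of how such a pattern selects files (hypothetical object keys and pattern; ClickPipes' actual matcher may differ in detail):

```python
from fnmatch import fnmatch

# Hypothetical object keys in a bucket and an illustrative pattern.
keys = [
    "data/2024/events_01.csv.gz",
    "data/2024/events_02.csv.gz",
    "data/2024/readme.txt",
]
pattern = "data/2024/events_*.csv.gz"

# Keep only the keys the wildcard pattern matches.
matched = [key for key in keys if fnmatch(key, pattern)]
print(matched)  # ['data/2024/events_01.csv.gz', 'data/2024/events_02.csv.gz']
```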
4. The UI will display a list of files in the specified bucket. Select your data format (we currently support a subset of ClickHouse formats) and whether you want to enable continuous ingestion ([more details below](#continuous-ingest)).

-<Image img={cp_step3_object_storage} alt="Set data format and topic" size="lg"/>
+<Image img={cp_step3_object_storage} alt="Set data format and topic" size="lg" border/>

5. In the next step, you can select whether you want to ingest data into a new ClickHouse table or reuse an existing one. Follow the instructions on the screen to modify your table name, schema, and settings. You can see a real-time preview of your changes in the sample table at the top.

-<Image img={cp_step4a} alt="Set table, schema, and settings" size="lg"/>
+<Image img={cp_step4a} alt="Set table, schema, and settings" size="lg" border/>

You can also customize the advanced settings using the controls provided.

6. Alternatively, you can decide to ingest your data into an existing ClickHouse table. In that case, the UI will allow you to map fields from the source to the ClickHouse fields in the selected destination table.

-<Image img={cp_step4b} alt="Use an existing table" size="lg"/>
+<Image img={cp_step4b} alt="Use an existing table" size="lg" border/>

:::info
You can also map [virtual columns](../../sql-reference/table-functions/s3#virtual-columns), like `_path` or `_size`, to fields.
:::

@@ -68,21 +68,22 @@ You can also map [virtual columns](../../sql-reference/table-functions/s3#virtua

- `Full access`: with full access to the cluster. Required if you use a Materialized View or Dictionary with the destination table.
- `Only destination table`: with `INSERT` permissions to the destination table only.

9. **Congratulations!** You have successfully set up your first ClickPipe. If this is a streaming ClickPipe, it will run continuously, ingesting data in real time from your remote data source. Otherwise, it will ingest the batch and complete.
3. To use key-based authentication, click on "Revoke and generate key pair" to generate a new key pair, and copy the generated public key to your SSH server under `~/.ssh/authorized_keys`.

4. Click on "Verify Connection" to verify the connection.
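The key installation in step 3 could look like this on the SSH server (the key below is a placeholder; use the actual public key shown in the ClickPipes UI):

```shell
# Create the .ssh directory if needed and append the public key that
# the ClickPipes UI generated (placeholder key material below).
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo 'ssh-ed25519 AAAAC3Nza... clickpipes-tunnel' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```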
@@ -110,7 +110,7 @@ Once the connection details are filled in, click on "Next".

5. Make sure to select, from the dropdown list, the replication slot you created in the prerequisites step.

7. You can select the tables you want to replicate from the source Postgres database. While selecting the tables, you can also choose to rename them in the destination ClickHouse database, as well as exclude specific columns.

@@ -141,7 +141,7 @@ You can configure the Advanced settings if needed. A brief description of each s

8. Select the "Full access" role from the permissions dropdown and click "Complete Setup".