
Commit 65e091c

Merge branch 'image_component' of https://github.com/ClickHouse/clickhouse-docs into image_component

2 parents 0c866ed + 020f48f

File tree: 16 files changed, +185 −196 lines

docs/_snippets/_gather_your_details_http.mdx

Lines changed: 3 additions & 2 deletions

@@ -1,5 +1,6 @@
 import cloud_connect_button from '@site/static/images/_snippets/cloud-connect-button.png';
 import connection_details_https from '@site/static/images/_snippets/connection-details-https.png';
+import Image from '@theme/IdealImage';
 
 To connect to ClickHouse with HTTP(S) you need this information:
 
@@ -11,10 +12,10 @@ To connect to ClickHouse with HTTP(S) you need this information:
 
 The details for your ClickHouse Cloud service are available in the ClickHouse Cloud console. Select the service that you will connect to and click **Connect**:
 
-<img src={cloud_connect_button} class="image" alt="ClickHouse Cloud service connect button" />
+<Image img={cloud_connect_button} size="md" alt="ClickHouse Cloud service connect button" />
 
 Choose **HTTPS**, and the details are available in an example `curl` command.
 
-<img src={connection_details_https} class="image" alt="ClickHouse Cloud HTTPS connection details" />
+<Image img={connection_details_https} size="md" alt="ClickHouse Cloud HTTPS connection details" />
 
 If you are using self-managed ClickHouse, the connection details are set by your ClickHouse administrator.

docs/_snippets/_gather_your_details_native.md

Lines changed: 4 additions & 2 deletions

@@ -1,5 +1,7 @@
 import cloud_connect_button from '@site/static/images/_snippets/cloud-connect-button.png';
 import connection_details_native from '@site/static/images/_snippets/connection-details-native.png';
+import Image from '@theme/IdealImage';
+
 
 To connect to ClickHouse with native TCP you need this information:
 
@@ -11,10 +13,10 @@ To connect to ClickHouse with native TCP you need this information:
 
 The details for your ClickHouse Cloud service are available in the ClickHouse Cloud console. Select the service that you will connect to and click **Connect**:
 
-<img src={cloud_connect_button} class="image" alt="ClickHouse Cloud service connect button" />
+<Image img={cloud_connect_button} size="md" alt="ClickHouse Cloud service connect button" />
 
 Choose **Native**, and the details are available in an example `clickhouse-client` command.
 
-<img src={connection_details_native} class="image" alt="ClickHouse Cloud Native TCP connection details" />
+<Image img={connection_details_native} size="md" alt="ClickHouse Cloud Native TCP connection details" />
 
 If you are using self-managed ClickHouse, the connection details are set by your ClickHouse administrator.

docs/integrations/data-ingestion/clickpipes/index.md

Lines changed: 4 additions & 3 deletions

@@ -17,22 +17,23 @@ import Postgressvg from '@site/static/images/integrations/logos/postgresql.svg';
 import redpanda_logo from '@site/static/images/integrations/logos/logo_redpanda.png';
 import clickpipes_stack from '@site/static/images/integrations/data-ingestion/clickpipes/clickpipes_stack.png';
 import cp_custom_role from '@site/static/images/integrations/data-ingestion/clickpipes/cp_custom_role.png';
+import Image from '@theme/IdealImage';
 
 # Integrating with ClickHouse Cloud
 
 ## Introduction {#introduction}
 
 [ClickPipes](/integrations/clickpipes) is a managed integration platform that makes ingesting data from a diverse set of sources as simple as clicking a few buttons. Designed for the most demanding workloads, ClickPipes's robust and scalable architecture ensures consistent performance and reliability. ClickPipes can be used for long-term streaming needs or one-time data loading job.
 
-<img src={clickpipes_stack} alt="ClickPipes stack" />
+<Image img={clickpipes_stack} alt="ClickPipes stack" size="lg" />
 
 ## Supported Data Sources {#supported-data-sources}
 
 | Name |Logo|Type| Status | Description |
 |----------------------|----|----|-----------------|------------------------------------------------------------------------------------------------------|
 | Apache Kafka |<Kafkasvg class="image" alt="Apache Kafka logo" style={{width: '3rem', 'height': '3rem'}}/>|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from Apache Kafka into ClickHouse Cloud. |
 | Confluent Cloud |<Confluentsvg class="image" alt="Confluent Cloud logo" style={{width: '3rem'}}/>|Streaming| Stable | Unlock the combined power of Confluent and ClickHouse Cloud through our direct integration. |
-| Redpanda |<img src={redpanda_logo} class="image" alt="Redpanda logo" style={{width: '2.5rem', 'background-color': 'transparent'}}/>|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from Redpanda into ClickHouse Cloud. |
+| Redpanda |<Image img={redpanda_logo} size="logo" alt="Redpanda logo"/> |Streaming| Stable | Configure ClickPipes and start ingesting streaming data from Redpanda into ClickHouse Cloud. |
 | AWS MSK |<Msksvg class="image" alt="AWS MSK logo" style={{width: '3rem', 'height': '3rem'}}/>|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from AWS MSK into ClickHouse Cloud. |
 | Azure Event Hubs |<Azureeventhubssvg class="image" alt="Azure Event Hubs logo" style={{width: '3rem'}}/>|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from Azure Event Hubs into ClickHouse Cloud. |
 | WarpStream |<Warpstreamsvg class="image" alt="WarpStream logo" style={{width: '3rem'}}/>|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from WarpStream into ClickHouse Cloud. |
@@ -66,7 +67,7 @@ Steps:
 1. create a custom role `CREATE ROLE my_clickpipes_role SETTINGS ...`. See [CREATE ROLE](/sql-reference/statements/create/role.md) syntax for details.
 2. add the custom role to ClickPipes user on step `Details and Settings` during the ClickPipes creation.
 
-<img src={cp_custom_role} alt="Assign a custom role" />
+<Image img={cp_custom_role} alt="Assign a custom role" size="lg" />
 
 ## Error reporting {#error-reporting}
 ClickPipes will create a table next to your destination table with the postfix `<destination_table_name>_clickpipes_error`. This table will contain any errors from the operations of your ClickPipe (network, connectivity, etc.) and also any data that don't conform to the schema. The error table has a [TTL](/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-ttl) of 7 days.
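The error-reporting behavior this file documents can be sketched in ClickHouse SQL. This is an illustrative sketch only: the destination table name `mytable` is a placeholder, and the error table's exact schema is not specified in this commit, so only `SELECT *` is shown:

```sql
-- Suppose a ClickPipe writes into `mytable`. Errors (network,
-- connectivity, schema mismatches) land in a side table with the
-- `_clickpipes_error` postfix, retained for 7 days by TTL.
SELECT *
FROM mytable_clickpipes_error
LIMIT 10;
```

Checking this table is a reasonable first step when rows appear to be missing from the destination.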

docs/integrations/data-ingestion/clickpipes/kafka.md

Lines changed: 15 additions & 14 deletions

@@ -25,6 +25,7 @@ import cp_success from '@site/static/images/integrations/data-ingestion/clickpip
 import cp_remove from '@site/static/images/integrations/data-ingestion/clickpipes/cp_remove.png';
 import cp_destination from '@site/static/images/integrations/data-ingestion/clickpipes/cp_destination.png';
 import cp_overview from '@site/static/images/integrations/data-ingestion/clickpipes/cp_overview.png';
+import Image from '@theme/IdealImage';
 
 # Integrating Kafka with ClickHouse Cloud
 ## Prerequisite {#prerequisite}
@@ -34,20 +35,20 @@ You have familiarized yourself with the [ClickPipes intro](./index.md).
 
 1. Access the SQL Console for your ClickHouse Cloud Service.
 
-<img src={cp_service} alt="ClickPipes service" />
+<Image img={cp_service} alt="ClickPipes service" size="lg"/>
 
 
 2. Select the `Data Sources` button on the left-side menu and click on "Set up a ClickPipe"
 
-<img src={cp_step0} alt="Select imports" />
+<Image img={cp_step0} alt="Select imports" size="lg"/>
 
 3. Select your data source.
 
-<img src={cp_step1} alt="Select data source type" />
+<Image img={cp_step1} alt="Select data source type" size="lg"/>
 
 4. Fill out the form by providing your ClickPipe with a name, a description (optional), your credentials, and other connection details.
 
-<img src={cp_step2} alt="Fill out connection details" />
+<Image img={cp_step2} alt="Fill out connection details" size="lg"/>
 
 5. Configure the schema registry. A valid schema is required for Avro streams and optional for JSON. This schema will be used to parse [AvroConfluent](../../../interfaces/formats.md/#data-format-avro-confluent) or validate JSON messages on the selected topic.
 - Avro messages that cannot be parsed or JSON messages that fail validation will generate an error.
@@ -62,41 +63,41 @@ without an embedded schema id, then the specific schema ID or subject must be sp
 
 6. Select your topic and the UI will display a sample document from the topic.
 
-<img src={cp_step3} alt="Set data format and topic" />
+<Image img={cp_step3} alt="Set data format and topic" size="lg"/>
 
 7. In the next step, you can select whether you want to ingest data into a new ClickHouse table or reuse an existing one. Follow the instructions in the screen to modify your table name, schema, and settings. You can see a real-time preview of your changes in the sample table at the top.
 
-<img src={cp_step4a} alt="Set table, schema, and settings" />
+<Image img={cp_step4a} alt="Set table, schema, and settings" size="lg"/>
 
 You can also customize the advanced settings using the controls provided
 
-<img src={cp_step4a3} alt="Set advanced controls" />
+<Image img={cp_step4a3} alt="Set advanced controls" size="lg"/>
 
 8. Alternatively, you can decide to ingest your data in an existing ClickHouse table. In that case, the UI will allow you to map fields from the source to the ClickHouse fields in the selected destination table.
 
-<img src={cp_step4b} alt="Use an existing table" />
+<Image img={cp_step4b} alt="Use an existing table" size="lg"/>
 
 9. Finally, you can configure permissions for the internal ClickPipes user.
 
 **Permissions:** ClickPipes will create a dedicated user for writing data into a destination table. You can select a role for this internal user using a custom role or one of the predefined role:
 - `Full access`: with the full access to the cluster. It might be useful if you use Materialized View or Dictionary with the destination table.
 - `Only destination table`: with the `INSERT` permissions to the destination table only.
 
-<img src={cp_step5} alt="Permissions" />
+<Image img={cp_step5} alt="Permissions" size="lg"/>
 
 10. By clicking on "Complete Setup", the system will register you ClickPipe, and you'll be able to see it listed in the summary table.
 
-<img src={cp_success} alt="Success notice" />
+<Image img={cp_success} alt="Success notice" size="sm"/>
 
-<img src={cp_remove} alt="Remove notice" />
+<Image img={cp_remove} alt="Remove notice" size="lg"/>
 
 The summary table provides controls to display sample data from the source or the destination table in ClickHouse
 
-<img src={cp_destination} alt="View destination" />
+<Image img={cp_destination} alt="View destination" size="lg"/>
 
 As well as controls to remove the ClickPipe and display a summary of the ingest job.
 
-<img src={cp_overview} alt="View overview" />
+<Image img={cp_overview} alt="View overview" size="lg"/>
 
 11. **Congratulations!** you have successfully set up your first ClickPipe. If this is a streaming ClickPipe it will be continuously running, ingesting data in real-time from your remote data source.
 
@@ -106,7 +107,7 @@ without an embedded schema id, then the specific schema ID or subject must be sp
 |----------------------|----|----|-----------------|------------------------------------------------------------------------------------------------------|
 | Apache Kafka |<Kafkasvg class="image" alt="Apache Kafka logo" style={{width: '3rem', 'height': '3rem'}}/>|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from Apache Kafka into ClickHouse Cloud. |
 | Confluent Cloud |<Confluentsvg class="image" alt="Confluent Cloud logo" style={{width: '3rem'}}/>|Streaming| Stable | Unlock the combined power of Confluent and ClickHouse Cloud through our direct integration. |
-| Redpanda |<img src={redpanda_logo} class="image" alt="Redpanda logo" style={{width: '2.5rem', 'background-color': 'transparent'}}/>|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from Redpanda into ClickHouse Cloud. |
+| Redpanda |<Image img={redpanda_logo} size="logo" alt="Redpanda logo"/>|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from Redpanda into ClickHouse Cloud. |
 | AWS MSK |<Msksvg class="image" alt="AWS MSK logo" style={{width: '3rem', 'height': '3rem'}}/>|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from AWS MSK into ClickHouse Cloud. |
 | Azure Event Hubs |<Azureeventhubssvg class="image" alt="Azure Event Hubs logo" style={{width: '3rem'}}/>|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from Azure Event Hubs into ClickHouse Cloud. |
 | WarpStream |<Warpstreamsvg class="image" alt="WarpStream logo" style={{width: '3rem'}}/>|Streaming| Stable | Configure ClickPipes and start ingesting streaming data from WarpStream into ClickHouse Cloud. |
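The two predefined permission levels described in this file's step 9 amount, roughly, to the following grants. This is a hypothetical sketch: the user name `clickpipes_user` and the table `default.my_destination` are placeholders, and the actual grants ClickPipes issues internally may differ:

```sql
-- Sketch of `Full access`: full rights on the cluster, useful when
-- Materialized Views or Dictionaries hang off the destination table.
GRANT ALL ON *.* TO clickpipes_user;

-- Sketch of `Only destination table`: INSERT into one table only.
GRANT INSERT ON default.my_destination TO clickpipes_user;
```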

docs/integrations/data-ingestion/clickpipes/object-storage.md

Lines changed: 14 additions & 13 deletions

@@ -19,6 +19,7 @@ import cp_success from '@site/static/images/integrations/data-ingestion/clickpip
 import cp_remove from '@site/static/images/integrations/data-ingestion/clickpipes/cp_remove.png';
 import cp_destination from '@site/static/images/integrations/data-ingestion/clickpipes/cp_destination.png';
 import cp_overview from '@site/static/images/integrations/data-ingestion/clickpipes/cp_overview.png';
+import Image from '@theme/IdealImage';
 
 # Integrating Object Storage with ClickHouse Cloud
 Object Storage ClickPipes provide a simple and resilient way to ingest data from Amazon S3 and Google Cloud Storage into ClickHouse Cloud. Both one-time and continuous ingestion are supported with exactly-once semantics.
@@ -31,31 +32,31 @@ You have familiarized yourself with the [ClickPipes intro](./index.md).
 
 1. In the cloud console, select the `Data Sources` button on the left-side menu and click on "Set up a ClickPipe"
 
-<img src={cp_step0} alt="Select imports" />
+<Image img={cp_step0} alt="Select imports" size="lg"/>
 
 2. Select your data source.
 
-<img src={cp_step1} alt="Select data source type" />
+<Image img={cp_step1} alt="Select data source type" size="lg"/>
 
 3. Fill out the form by providing your ClickPipe with a name, a description (optional), your IAM role or credentials, and bucket URL. You can specify multiple files using bash-like wildcards. For more information, [see the documentation on using wildcards in path](#limitations).
 
-<img src={cp_step2_object_storage} alt="Fill out connection details" />
+<Image img={cp_step2_object_storage} alt="Fill out connection details" size="lg"/>
 
 4. The UI will display a list of files in the specified bucket. Select your data format (we currently support a subset of ClickHouse formats) and if you want to enable continuous ingestion [More details below](#continuous-ingest).
 
-<img src={cp_step3_object_storage} alt="Set data format and topic" />
+<Image img={cp_step3_object_storage} alt="Set data format and topic" size="lg"/>
 
 5. In the next step, you can select whether you want to ingest data into a new ClickHouse table or reuse an existing one. Follow the instructions in the screen to modify your table name, schema, and settings. You can see a real-time preview of your changes in the sample table at the top.
 
-<img src={cp_step4a} alt="Set table, schema, and settings" />
+<Image img={cp_step4a} alt="Set table, schema, and settings" size="lg"/>
 
 You can also customize the advanced settings using the controls provided
 
-<img src={cp_step4a3} alt="Set advanced controls" />
+<Image img={cp_step4a3} alt="Set advanced controls" size="lg"/>
 
 6. Alternatively, you can decide to ingest your data in an existing ClickHouse table. In that case, the UI will allow you to map fields from the source to the ClickHouse fields in the selected destination table.
 
-<img src={cp_step4b} alt="Use an existing table" />
+<Image img={cp_step4b} alt="Use an existing table" size="lg"/>
 
 :::info
 You can also map [virtual columns](../../sql-reference/table-functions/s3#virtual-columns), like `_path` or `_size`, to fields.
@@ -67,22 +68,22 @@ You can also map [virtual columns](../../sql-reference/table-functions/s3#virtua
 - `Full access`: with the full access to the cluster. Required if you use Materialized View or Dictionary with the destination table.
 - `Only destination table`: with the `INSERT` permissions to the destination table only.
 
-<img src={cp_step5} alt="Permissions" />
+<Image img={cp_step5} alt="Permissions" size="lg"/>
 
 8. By clicking on "Complete Setup", the system will register you ClickPipe, and you'll be able to see it listed in the summary table.
 
-<img src={cp_success} alt="Success notice" />
+<Image img={cp_success} alt="Success notice" size="sm"/>
 
-<img src={cp_remove} alt="Remove notice" />
+<Image img={cp_remove} alt="Remove notice" size="lg"/>
 
 The summary table provides controls to display sample data from the source or the destination table in ClickHouse
 
-<img src={cp_destination} alt="View destination" />
+<Image img={cp_destination} alt="View destination" size="lg"/>
 
 As well as controls to remove the ClickPipe and display a summary of the ingest job.
 
-<img src={cp_overview} alt="View overview" />
-
+<Image img={cp_overview} alt="View overview" size="lg"/>
+
 9. **Congratulations!** you have successfully set up your first ClickPipe. If this is a streaming ClickPipe it will be continuously running, ingesting data in real-time from your remote data source. Otherwise it will ingest the batch and complete.
 
 ## Supported Data Sources {#supported-data-sources}
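The `_path` and `_size` virtual columns that this file says ClickPipes can map to fields are the same ones exposed by ClickHouse's `s3` table function, so they can be inspected directly before setting up a pipe. A sketch, with a placeholder bucket URL and format:

```sql
-- Placeholder bucket and format: substitute your own values.
-- `_path` is the object key, `_size` the object size in bytes.
SELECT _path, _size, *
FROM s3('https://my-bucket.s3.amazonaws.com/data/*.csv', 'CSVWithNames')
LIMIT 5;
```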

docs/integrations/data-ingestion/dbms/dynamodb/index.md

Lines changed: 5 additions & 5 deletions

@@ -12,6 +12,7 @@ import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
 import dynamodb_kinesis_stream from '@site/static/images/integrations/data-ingestion/dbms/dynamodb/dynamodb-kinesis-stream.png';
 import dynamodb_s3_export from '@site/static/images/integrations/data-ingestion/dbms/dynamodb/dynamodb-s3-export.png';
 import dynamodb_map_columns from '@site/static/images/integrations/data-ingestion/dbms/dynamodb/dynamodb-map-columns.png';
+import Image from '@theme/IdealImage';
 
 # CDC from DynamoDB to ClickHouse
 
@@ -31,14 +32,14 @@ Data will be ingested into a `ReplacingMergeTree`. This table engine is commonly
 First, you will want to enable a Kinesis stream on your DynamoDB table to capture changes in real-time. We want to do this before we create the snapshot to avoid missing any data.
 Find the AWS guide located [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/kds.html).
 
-<img src={dynamodb_kinesis_stream} alt="DynamoDB Kinesis Stream"/>
+<Image img={dynamodb_kinesis_stream} size="lg" alt="DynamoDB Kinesis Stream" />
 
 ## 2. Create the snapshot {#2-create-the-snapshot}
 
 Next, we will create a snapshot of the DynamoDB table. This can be achieved through an AWS export to S3. Find the AWS guide located [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/S3DataExport.HowItWorks.html).
 **You will want to do a "Full export" in the DynamoDB JSON format.**
 
-<img src={dynamodb_s3_export} alt="DynamoDB S3 Export"/>
+<Image img={dynamodb_s3_export} size="lg" alt="DynamoDB S3 Export"/>
 
 ## 3. Load the snapshot into ClickHouse {#3-load-the-snapshot-into-clickhouse}
 
@@ -91,7 +92,7 @@ CREATE TABLE IF NOT EXISTS "default"."destination" (
 "first_name" String,
 "age" Int8,
 "version" Int64
-)
+)
 ENGINE ReplacingMergeTree("version")
 ORDER BY id;
 ```
@@ -128,7 +129,7 @@ Now we can set up the Kinesis ClickPipe to capture real-time changes from the Ki
 - `ApproximateCreationDateTime`: `version`
 - Map other fields to the appropriate destination columns as shown below
 
-<img src={dynamodb_map_columns} alt="DynamoDB Map Columns"/>
+<Image img={dynamodb_map_columns} size="lg" alt="DynamoDB Map Columns"/>
 
 ## 5. Cleanup (optional) {#5-cleanup-optional}
 
@@ -139,4 +140,3 @@ DROP TABLE IF EXISTS "default"."snapshot";
 DROP TABLE IF EXISTS "default"."snapshot_clickpipes_error";
 DROP VIEW IF EXISTS "default"."snapshot_mv";
 ```
-
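Given the `ReplacingMergeTree("version")` destination table shown in this file's diff, deduplication of CDC updates by `version` can be checked at query time with `FINAL`. A sketch against that table:

```sql
-- Rows sharing the same `id` collapse to the row with the highest
-- `version` once parts merge; FINAL forces that collapse at query
-- time (at some cost on large tables).
SELECT id, first_name, age, version
FROM "default"."destination" FINAL
ORDER BY id;
```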
