**docs/integrations/data-ingestion/clickpipes/kafka/03_reference.md** (3 additions, 17 deletions)
```diff
@@ -40,7 +40,7 @@ The supported formats are:
 
 The following standard ClickHouse data types are currently supported in ClickPipes:
 
-- Base numeric types - \[U\]Int8/16/32/64 and Float32/64
+- Base numeric types - \[U\]Int8/16/32/64, Float32/64, and BFloat16
 - Large integer types - \[U\]Int128/256
 - Decimal Types
 - Boolean
```
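For illustration, a destination table covering these base numeric types — including the newly added `BFloat16` — might look like the following sketch. Table and column names are hypothetical, and `BFloat16` may need to be explicitly enabled on older ClickHouse versions:

```sql
-- Hypothetical ClickPipes destination table using base numeric types,
-- including the newly supported BFloat16 (often used for ML embeddings).
CREATE TABLE kafka_metrics
(
    `id`        UInt64,
    `reading`   Float64,
    `embedding` Array(BFloat16)
)
ENGINE = MergeTree
ORDER BY id;
```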
```diff
@@ -55,30 +55,22 @@ The following standard ClickHouse data types are currently supported in ClickPipes:
 - all ClickHouse LowCardinality types
 - Map with keys and values using any of the above types (including Nullables)
 - Tuple and Array with elements using any of the above types (including Nullables, one level depth only)
+- SimpleAggregateFunction types (for AggregatingMergeTree or SummingMergeTree destinations)
 
 ### Avro {#avro}
 
 #### Supported Avro Data Types {#supported-avro-data-types}
-
 ClickPipes supports all Avro Primitive and Complex types, and all Avro Logical types except `time-millis`, `time-micros`, `local-timestamp-millis`, `local_timestamp-micros`, and `duration`. Avro `record` types are converted to Tuple, `array` types to Array, and `map` to Map (string keys only). In general the conversions listed [here](/interfaces/formats/Avro#data-type-mapping) are available. We recommend using exact type matching for Avro numeric types, as ClickPipes does not check for overflow or precision loss on type conversion.
+Alternatively, all Avro types can be inserted into a `String` column, and will be represented as a valid JSON string in that case.
 
 #### Nullable types and Avro unions {#nullable-types-and-avro-unions}
-
 Nullable types in Avro are defined by using a Union schema of `(T, null)` or `(null, T)` where T is the base Avro type. During schema inference, such unions will be mapped to a ClickHouse "Nullable" column. Note that ClickHouse does not support
 `Nullable(Array)`, `Nullable(Map)`, or `Nullable(Tuple)` types. Avro null unions for these types will be mapped to non-nullable versions (Avro Record types are mapped to a ClickHouse named Tuple). Avro "nulls" for these types will be inserted as:
 - An empty Array for a null Avro array
 - An empty Map for a null Avro Map
 - A named Tuple with all default/zero values for a null Avro Record
 
-### Experimental {#experimental-types-support}
-
 #### Variant type support {#variant-type-support}
-
-<ExperimentalBadge/>
-
-Variant type support is automatic if your Cloud service is running ClickHouse 25.3 or later. Otherwise, you will
-have to submit a support ticket to enable it on your service.
-
 ClickPipes supports the Variant type in the following circumstances:
 - Avro Unions. If your Avro schema contains a union with multiple non-null types, ClickPipes will infer the
 appropriate variant type. Variant types are not otherwise supported for Avro data.
```
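To make the new `SimpleAggregateFunction` line concrete, here is a minimal sketch of an aggregating destination table (names are illustrative, not from the diff): rows sharing the same `key` are collapsed at merge time, keeping the running sum and maximum.

```sql
-- Illustrative AggregatingMergeTree destination for a ClickPipe:
-- SimpleAggregateFunction columns combine rows with equal keys on merge.
CREATE TABLE pipe_aggregates
(
    `key`   String,
    `total` SimpleAggregateFunction(sum, UInt64),
    `peak`  SimpleAggregateFunction(max, Float64)
)
ENGINE = AggregatingMergeTree
ORDER BY key;
```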
```diff
@@ -87,12 +79,6 @@ ClickPipes supports the Variant type in the following circumstances:
 type can be used in the Variant definition - for example, `Variant(Int64, UInt32)` is not supported.
 
 #### JSON type support {#json-type-support}
-
-<ExperimentalBadge/>
-
-JSON type support is automatic if your Cloud service is running ClickHouse 25.3 or later. Otherwise, you will
-have to submit a support ticket to enable it on your service.
-
 ClickPipes support the JSON type in the following circumstances:
 - Avro Record types can always be assigned to a JSON column.
 - Avro String and Bytes types can be assigned to a JSON column if the column actually holds JSON String objects.
```
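As a sketch of the Variant and JSON circumstances above: an Avro union such as `["string", "long"]` would map to a `Variant(String, Int64)` column, and an Avro Record (or a String/Bytes field holding JSON objects) can target a `JSON` column. Table and column names here are hypothetical:

```sql
-- Sketch: destination columns for an Avro union and an Avro Record.
-- Note that only one integer type may appear in a Variant definition.
CREATE TABLE kafka_events
(
    `value`   Variant(String, Int64),
    `payload` JSON
)
ENGINE = MergeTree
ORDER BY tuple();
```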
**docs/integrations/data-ingestion/clickpipes/kafka/04_best_practices.md** (3 additions, 5 deletions)
````diff
@@ -99,18 +99,16 @@ Role-based access only works for ClickHouse Cloud instances deployed to AWS.
 ```
 
 ### Custom Certificates {#custom-certificates}
-ClickPipes for Kafka supports the upload of custom certificates for Kafka brokers with SASL & public SSL/TLS certificate. You can upload your certificate in the SSL Certificate section of the ClickPipe setup.
-:::note
-Please note that while we support uploading a single SSL certificate along with SASL for Kafka, SSL with Mutual TLS (mTLS) is not supported at this time.
-:::
+ClickPipes for Kafka supports the upload of custom certificates for Kafka brokers which use non-public server certificates.
+Upload of client certificates and keys is also supported for mutual TLS (mTLS) based authentication.
 
 ## Performance {#performance}
 
 ### Batching {#batching}
 ClickPipes inserts data into ClickHouse in batches. This is to avoid creating too many parts in the database which can lead to performance issues in the cluster.
 
 Batches are inserted when one of the following criteria has been met:
-- The batch size has reached the maximum size (100,000 rows or 20MB)
+- The batch size has reached the maximum size (100,000 rows or 32MB per 1GB of pod memory)
 - The batch has been open for a maximum amount of time (5 seconds)
````
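The batching change makes the byte threshold scale with pod memory: a replica with 4 GB of memory would flush at roughly 128 MB rather than the former flat 20 MB. Since batching exists to limit part creation, one way to sanity-check its effect is to watch active parts on the destination table — a sketch, with a hypothetical database and table name:

```sql
-- Count active data parts for a hypothetical destination table;
-- rapid growth suggests batches are flushing more often than expected.
SELECT count() AS active_parts
FROM system.parts
WHERE database = 'default'
  AND table = 'kafka_destination'
  AND active;
```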
**docs/integrations/data-ingestion/clickpipes/kinesis.md** (5 additions, 10 deletions)
```diff
@@ -92,7 +92,7 @@ The supported formats are:
 ### Standard types support {#standard-types-support}
 The following ClickHouse data types are currently supported in ClickPipes:
 
-- Base numeric types - \[U\]Int8/16/32/64 and Float32/64
+- Base numeric types - \[U\]Int8/16/32/64, Float32/64, and BFloat16
 - Large integer types - \[U\]Int128/256
 - Decimal Types
 - Boolean
```
```diff
@@ -107,19 +107,14 @@ The following ClickHouse data types are currently supported in ClickPipes:
 - all ClickHouse LowCardinality types
 - Map with keys and values using any of the above types (including Nullables)
 - Tuple and Array with elements using any of the above types (including Nullables, one level depth only)
-
-### Variant type support (experimental) {#variant-type-support}
-Variant type support is automatic if your Cloud service is running ClickHouse 25.3 or later. Otherwise, you will
-have to submit a support ticket to enable it on your service.
+- SimpleAggregateFunction types (for AggregatingMergeTree or SummingMergeTree destinations)
 
+### Variant type support {#variant-type-support}
 You can manually specify a Variant type (such as `Variant(String, Int64, DateTime)`) for any JSON field
 in the source data stream. Because of the way ClickPipes determines the correct variant subtype to use, only one integer or datetime
 type can be used in the Variant definition - for example, `Variant(Int64, UInt32)` is not supported.
 
-### JSON type support (experimental) {#json-type-support}
-JSON type support is automatic if your Cloud service is running ClickHouse 25.3 or later. Otherwise, you will
-have to submit a support ticket to enable it on your service.
-
+### JSON type support {#json-type-support}
 JSON fields that are always a JSON object can be assigned to a JSON destination column. You will have to manually change the destination
 column to the desired JSON type, including any fixed or skipped paths.
 
```
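Changing the destination column "to the desired JSON type, including any fixed or skipped paths" could look like the sketch below, using the standard ClickHouse JSON type syntax for typed and skipped paths. The table and paths are hypothetical, and depending on existing data you may need to declare the column this way at table creation instead:

```sql
-- Sketch: a JSON destination column with one fixed (typed) path
-- and one skipped path.
ALTER TABLE kinesis_events
    MODIFY COLUMN `payload` JSON(user.id UInt64, SKIP debug.trace);
```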
```diff
@@ -148,7 +143,7 @@ view). For such pipes, it may improve ClickPipes performance to delete all the
 ClickPipes inserts data into ClickHouse in batches. This is to avoid creating too many parts in the database which can lead to performance issues in the cluster.
 
 Batches are inserted when one of the following criteria has been met:
-- The batch size has reached the maximum size (100,000 rows or 20MB)
+- The batch size has reached the maximum size (100,000 rows or 32MB per 1GB of replica memory)
 - The batch has been open for a maximum amount of time (5 seconds)
```