| Name | Description | Default Value |
|------|-------------|---------------|
|`topics` (Required) | The Kafka topics to poll - topic names must match table names |`""`|
|`key.converter` (Required* - See Description) | Set according to the types of your keys. Required here if you are passing keys (and not defined in worker config). |`"org.apache.kafka.connect.storage.StringConverter"`|
|`value.converter` (Required* - See Description) | Set based on the type of data on your topic. Supported: JSON, String, Avro, or Protobuf formats. Required here if not defined in worker config. |`"org.apache.kafka.connect.json.JsonConverter"`|
|`value.converter.schemas.enable`| Enables schema support for the connector's value converter |`"false"`|
|`errors.tolerance`| Connector error tolerance. Supported: `none`, `all` |`"none"`|
|`errors.deadletterqueue.topic.name`| If set (with `errors.tolerance=all`), a DLQ will be used for failed batches (see [Troubleshooting](#troubleshooting)) |`""`|
|`errors.deadletterqueue.context.headers.enable`| Adds additional headers for the DLQ |`""`|
|`clickhouseSettings`| Comma-separated list of ClickHouse settings (e.g. "insert_quorum=2, etc...") |`""`|
|`topic2TableMap`| Comma-separated list that maps topic names to table names (e.g. "topic1=table1, topic2=table2, etc...") |`""`|
|`tableRefreshInterval`| Time (in seconds) to refresh the table definition cache |`0`|
|`keeperOnCluster`| Allows configuration of the ON CLUSTER parameter for self-hosted instances (e.g. `ON CLUSTER clusterNameInConfigFileDefinition`) for the exactly-once `connect_state` table (see [Distributed DDL Queries](/sql-reference/distributed-ddl)) |`""`|
|`bypassRowBinary`| Allows disabling use of RowBinary and RowBinaryWithDefaults for schema-based data (Avro, Protobuf, etc.) - should only be used when data will have missing columns and Nullable/Default are unacceptable |`"false"`|
|`dateTimeFormats`| Date time formats for parsing DateTime64 schema fields, separated by `;` (e.g. `someDateField=yyyy-MM-dd HH:mm:ss.SSSSSSSSS;someOtherDateField=yyyy-MM-dd HH:mm:ss`) |`""`|
|`tolerateStateMismatch`| Allows the connector to drop records "earlier" than the current offset stored AFTER_PROCESSING (e.g. if offset 5 is sent, and offset 250 was the last recorded offset) |`"false"`|
|`ignorePartitionsWhenBatching`| Ignores the partition when collecting messages for insert (only if `exactlyOnce` is `false`). Performance note: the more connector tasks, the fewer Kafka partitions assigned per task - this can mean diminishing returns. |`"false"`|
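
Putting several of these options together, the following is a minimal sketch of a sink configuration in Kafka Connect's standalone `.properties` format. The topic, table, and DLQ names (`events_topic`, `metrics_topic`, `events`, `metrics`, `clickhouse-sink-dlq`) are hypothetical placeholders; the connection settings (hostname, credentials, and so on) are omitted and must be supplied, and the connector class name should be verified against your installed version.

```properties
# Minimal sketch only - placeholder names throughout; connection
# settings (hostname, credentials, etc.) must be added.
name=clickhouse-sink
connector.class=com.clickhouse.kafka.connect.ClickHouseSinkConnector
topics=events_topic,metrics_topic

# Converters: string keys, schemaless JSON values
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false

# Tolerate bad records and route failed batches to a DLQ
errors.tolerance=all
errors.deadletterqueue.topic.name=clickhouse-sink-dlq
errors.deadletterqueue.context.headers.enable=true

# Map topics to differently named target tables
topic2TableMap=events_topic=events,metrics_topic=metrics

# Re-read table definitions every 60 seconds
tableRefreshInterval=60
```

Setting `errors.tolerance=all` together with a DLQ topic trades hard task failures for inspectable dead-letter records, which pairs with the guidance in the [Troubleshooting](#troubleshooting) section.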