
Commit a80b323

Update anchors for formats page restructure
1 parent 81ec9d1

File tree

5 files changed: +5 additions, -5 deletions

docs/cloud/onboard/02_migrate/01_migration_guides/08_other_methods/03_object-storage-to-clickhouse.md

Lines changed: 1 addition & 1 deletion

@@ -26,6 +26,6 @@ from you current database system to Cloud Object Storage, in order to migrate th
 
 Although this is a two steps process (offload data into a Cloud Object Storage, then load into ClickHouse), the advantage is that this
 scales to petabytes thanks to a [solid ClickHouse Cloud](https://clickhouse.com/blog/getting-data-into-clickhouse-part-3-s3) support of highly-parallel reads from Cloud Object Storage.
-Also you can leverage sophisticated and compressed formats like [Parquet](/interfaces/formats/#data-format-parquet).
+Also you can leverage sophisticated and compressed formats like [Parquet](/interfaces/formats/Parquet).
 
 There is a [blog article](https://clickhouse.com/blog/getting-data-into-clickhouse-part-3-s3) with concrete code examples showing how you can get data into ClickHouse Cloud using S3.

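The two-step flow this snippet documents (offload to object storage, then load into ClickHouse) boils down to a single `INSERT ... SELECT` over ClickHouse's `s3` table function. A minimal sketch that only assembles the statement; the `events` table and bucket URL are hypothetical placeholders:

```python
# Sketch: build the INSERT ... SELECT that pulls Parquet files from object
# storage into a ClickHouse table. The table name and bucket URL are
# hypothetical; the statement would be run via clickhouse-client or HTTP.
def s3_import_query(table: str, bucket_url: str, fmt: str = "Parquet") -> str:
    """Return an INSERT ... SELECT statement using the s3 table function."""
    return f"INSERT INTO {table} SELECT * FROM s3('{bucket_url}', '{fmt}')"

query = s3_import_query("events", "https://my-bucket.s3.amazonaws.com/data/*.parquet")
print(query)
```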
docs/faq/integration/json-import.md

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ doc_type: 'guide'
 
 # How to Import JSON Into ClickHouse? {#how-to-import-json-into-clickhouse}
 
-ClickHouse supports a wide range of [data formats for input and output](../../interfaces/formats.md). There are multiple JSON variations among them, but the most commonly used for data ingestion is [JSONEachRow](../../interfaces/formats.md#jsoneachrow). It expects one JSON object per row, each object separated by a newline.
+ClickHouse supports a wide range of [data formats for input and output](/interfaces/formats). There are multiple JSON variations among them, but the most commonly used for data ingestion is [JSONEachRow](/interfaces/formats/JSONEachRow). It expects one JSON object per row, each object separated by a newline.
 
 ## Examples {#examples}

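The JSONEachRow shape described in this hunk (one JSON object per row, newline-separated) is easy to produce from any language's JSON library. A minimal Python sketch with made-up rows and column names:

```python
import json

# Sketch: serialize rows as JSONEachRow, i.e. one JSON object per line.
# The rows and column names here are illustrative.
rows = [
    {"id": 1, "name": "first"},
    {"id": 2, "name": "second"},
]
payload = "\n".join(json.dumps(row) for row in rows)
print(payload)
```

The resulting text could then be piped to something like `clickhouse-client --query "INSERT INTO t FORMAT JSONEachRow"` (table name hypothetical).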
docs/getting-started/example-datasets/menus.md

Lines changed: 1 addition & 1 deletion

@@ -121,7 +121,7 @@ clickhouse-client --format_csv_allow_single_quotes 0 --input_format_null_as_defa
 clickhouse-client --format_csv_allow_single_quotes 0 --input_format_null_as_default 0 --date_time_input_format best_effort --query "INSERT INTO menu_item FORMAT CSVWithNames" < MenuItem.csv
 ```
 
-We use [CSVWithNames](../../interfaces/formats.md#csvwithnames) format as the data is represented by CSV with header.
+We use [CSVWithNames](/interfaces/formats/CSVWithNames) format as the data is represented by CSV with header.
 
 We disable `format_csv_allow_single_quotes` as only double quotes are used for data fields and single quotes can be inside the values and should not confuse the CSV parser.

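The `CSVWithNames` input this hunk refers to is plain CSV whose first row carries the column names, with double quotes as the only quoting character (hence disabling `format_csv_allow_single_quotes`). A sketch with illustrative columns, not the actual MenuItem.csv schema:

```python
import csv
import io

# Sketch: emit CSV in the layout CSVWithNames expects: a header row of column
# names, then data rows, all double-quoted. Columns and values are made up.
buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)  # double quotes only
writer.writerow(["id", "dish"])                  # header row read by CSVWithNames
writer.writerow([1, "Oyster's, fried"])          # apostrophe and comma stay safely quoted
print(buf.getvalue(), end="")
```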
docs/integrations/data-ingestion/kafka/confluent/kafka-connect-http.md

Lines changed: 1 addition & 1 deletion

@@ -146,7 +146,7 @@ The following additional parameters are relevant to using the HTTP Sink with Cli
 * `batch.max.size` - The number of rows to send in a single batch. Ensure this set is to an appropriately large number. Per ClickHouse [recommendations](/sql-reference/statements/insert-into#performance-considerations) a value of 1000 should be considered a minimum.
 * `tasks.max` - The HTTP Sink connector supports running one or more tasks. This can be used to increase performance. Along with batch size this represents your primary means of improving performance.
 * `key.converter` - set according to the types of your keys.
-* `value.converter` - set based on the type of data on your topic. This data does not need a schema. The format here must be consistent with the FORMAT specified in the parameter `http.api.url`. The simplest here is to use JSON and the org.apache.kafka.connect.json.JsonConverter converter. Treating the value as a string, via the converter org.apache.kafka.connect.storage.StringConverter, is also possible - although this will require the user to extract a value in the insert statement using functions. [Avro format](../../../../interfaces/formats.md#data-format-avro) is also supported in ClickHouse if using the io.confluent.connect.avro.AvroConverter converter.
+* `value.converter` - set based on the type of data on your topic. This data does not need a schema. The format here must be consistent with the FORMAT specified in the parameter `http.api.url`. The simplest here is to use JSON and the org.apache.kafka.connect.json.JsonConverter converter. Treating the value as a string, via the converter org.apache.kafka.connect.storage.StringConverter, is also possible - although this will require the user to extract a value in the insert statement using functions. [Avro format](/interfaces/formats/Avro) is also supported in ClickHouse if using the io.confluent.connect.avro.AvroConverter converter.
 
 A full list of settings, including how to configure a proxy, retries, and advanced SSL, can be found [here](https://docs.confluent.io/kafka-connect-http/current/connector_config.html).

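Put together, the batching and converter parameters listed in this hunk might appear in an HTTP Sink configuration roughly like the following sketch; the host, port, and table in `http.api.url` are placeholders, not values from this commit:

```json
{
  "http.api.url": "https://clickhouse-host:8443/?query=INSERT%20INTO%20kafka_table%20FORMAT%20JSONEachRow",
  "batch.max.size": "1000",
  "tasks.max": "1",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "false"
}
```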
docs/integrations/data-ingestion/s3/performance.md

Lines changed: 1 addition & 1 deletion

@@ -182,7 +182,7 @@ FROM s3('https://datasets-documentation.s3.eu-west-3.amazonaws.com/stackoverflow
 0 rows in set. Elapsed: 191.692 sec. Processed 59.82 million rows, 24.03 GB (312.06 thousand rows/s., 125.37 MB/s.)
 ```
 
-In our example we only return a few rows. If measuring the performance of `SELECT` queries, where large volumes of data are returned to the client, either utilize the [null format](/interfaces/formats/#null) for queries or direct results to the [`Null` engine](/engines/table-engines/special/null.md). This should avoid the client being overwhelmed with data and network saturation.
+In our example we only return a few rows. If measuring the performance of `SELECT` queries, where large volumes of data are returned to the client, either utilize the [null format](/interfaces/formats/Null) for queries or direct results to the [`Null` engine](/engines/table-engines/special/null.md). This should avoid the client being overwhelmed with data and network saturation.
 
 :::info
 When reading from queries, the initial query can often appear slower than if the same query is repeated. This can be attributed to both S3's own caching but also the [ClickHouse Schema Inference Cache](/operations/system-tables/schema_inference_cache). This stores the inferred schema for files and means the inference step can be skipped on subsequent accesses, thus reducing query time.

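The `Null` format technique this hunk links to amounts to appending `FORMAT Null` to the query, so the server does all the reading but sends no result rows to the client. A tiny sketch of that rewrite; the query text is illustrative:

```python
# Sketch: rewrite a SELECT so its results are discarded server-side via the
# Null output format, isolating read performance from network transfer.
def with_null_format(query: str) -> str:
    return query.rstrip().rstrip(";") + " FORMAT Null"

print(with_null_format("SELECT * FROM s3('https://bucket/data/*.parquet', 'Parquet');"))
```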