Commit eeef9d1

Merge pull request #3409 from ClickHouse/translate_ja

more images to static

2 parents: 3621c92 + 8661d31

25 files changed: +76 / -46 lines

docs/cloud/bestpractices/asyncinserts.md (16 additions & 4 deletions)

@@ -4,6 +4,10 @@ sidebar_label: Asynchronous Inserts
 title: Asynchronous Inserts (async_insert)
 ---
 
+import asyncInsert01 from '@site/static/images/cloud/bestpractices/async-01.png';
+import asyncInsert02 from '@site/static/images/cloud/bestpractices/async-02.png';
+import asyncInsert03 from '@site/static/images/cloud/bestpractices/async-03.png';
+
 Inserting data into ClickHouse in large batches is a best practice. It saves compute cycles and disk I/O, and therefore it saves money. If your use case allows you to batch your inserts external to ClickHouse, then that is one option. If you would like ClickHouse to create the batches, then you can use the asynchronous INSERT mode described here.
 
 Use asynchronous inserts as an alternative to both batching data on the client-side and keeping the insert rate at around one insert query per second by enabling the [async_insert](/operations/settings/settings.md/#async_insert) setting. This causes ClickHouse to handle the batching on the server-side.

@@ -12,7 +16,10 @@ By default, ClickHouse is writing data synchronously.
 Each insert sent to ClickHouse causes ClickHouse to immediately create a part containing the data from the insert.
 This is the default behavior when the async_insert setting is set to its default value of 0:
 
-![compression block diagram](images/async-01.png)
+<img src={asyncInsert01}
+  class="image"
+  alt="Asynchronous insert process - default synchronous inserts"
+  style={{width: '100%', background: 'none'}} />
 
 By setting async_insert to 1, ClickHouse first stores the incoming inserts into an in-memory buffer before flushing them regularly to disk.
 

@@ -30,10 +37,15 @@ With the [wait_for_async_insert](/operations/settings/settings.md/#wait_for_asyn
 
 The following two diagrams illustrate the two settings for async_insert and wait_for_async_insert:
 
-![compression block diagram](images/async-02.png)
-
-![compression block diagram](images/async-03.png)
+<img src={asyncInsert02}
+  class="image"
+  alt="Asynchronous insert process - async_insert=1, wait_for_async_insert=1"
+  style={{width: '100%', background: 'none'}} />
 
+<img src={asyncInsert03}
+  class="image"
+  alt="Asynchronous insert process - async_insert=1, wait_for_async_insert=0"
+  style={{width: '100%', background: 'none'}} />
 
 ### Enabling asynchronous inserts {#enabling-asynchronous-inserts}
 
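For context, a minimal sketch of the two settings this page documents, as used from SQL (the table and rows are hypothetical, for illustration only):

```sql
-- Buffer inserts server-side instead of creating one storage part per INSERT.
SET async_insert = 1;

-- 1 = return to the client only after the buffer is flushed into a part;
-- 0 = return as soon as the data lands in the in-memory buffer.
SET wait_for_async_insert = 1;

-- Hypothetical table: many small INSERTs like this get batched by the server.
INSERT INTO events (id, message) VALUES (1, 'hello'), (2, 'world');
```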
docs/cloud/bestpractices/partitioningkey.md (11 additions & 2 deletions)

@@ -4,16 +4,25 @@ sidebar_label: Choose a Low Cardinality Partitioning Key
 title: Choose a Low Cardinality Partitioning Key
 ---
 
+import partitioning01 from '@site/static/images/cloud/bestpractices/partitioning-01.png';
+import partitioning02 from '@site/static/images/cloud/bestpractices/partitioning-02.png';
+
 When you send an insert statement (that should contain many rows - see [section above](/optimize/bulk-inserts)) to a table in ClickHouse Cloud, and that
 table is not using a [partitioning key](/engines/table-engines/mergetree-family/custom-partitioning-key.md) then all row data from that insert is written into a new part on storage:
 
-![compression block diagram](images/partitioning-01.png)
+<img src={partitioning01}
+  class="image"
+  alt="Insert without partitioning key - one part created"
+  style={{width: '100%', background: 'none'}} />
 
 However, when you send an insert statement to a table in ClickHouse Cloud, and that table has a partitioning key, then ClickHouse:
 - checks the partitioning key values of the rows contained in the insert
 - creates one new part on storage per distinct partitioning key value
 - places the rows in the corresponding parts by partitioning key value
 
-![compression block diagram](images/partitioning-02.png)
+<img src={partitioning02}
+  class="image"
+  alt="Insert with partitioning key - multiple parts created based on partitioning key values"
+  style={{width: '100%', background: 'none'}} />
 
 Therefore, to minimize the number of write requests to the ClickHouse Cloud object storage, use a low cardinality partitioning key or avoid using any partitioning key for your table.
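As a hedged illustration of that advice (the schema is hypothetical): a time-bucketing expression such as `toYYYYMM` keeps partition cardinality low, so an insert spanning one month lands in a single part rather than fanning out across many:

```sql
-- Low cardinality: at most one partition per month.
-- Compare with e.g. PARTITION BY user_id, which could create one new part
-- per distinct user per insert.
CREATE TABLE events
(
    timestamp DateTime,
    user_id   UInt64,
    payload   String
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(timestamp)
ORDER BY (user_id, timestamp);

-- Count active parts per partition to observe the effect.
SELECT partition, count() AS active_parts
FROM system.parts
WHERE table = 'events' AND active
GROUP BY partition;
```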

docs/cloud/manage/jan2025_faq/dimensions.md (32 additions & 31 deletions)

@@ -5,6 +5,9 @@ keywords: [new pricing, dimensions]
 description: Pricing dimensions for data transfer and ClickPipes
 ---
 
+import clickpipesPricingFaq1 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_1.png';
+import clickpipesPricingFaq2 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_2.png';
+import clickpipesPricingFaq3 from '@site/static/images/cloud/manage/jan2025_faq/external_clickpipes_pricing_faq_3.png';
 import NetworkPricing from '@site/docs/cloud/manage/_snippets/_network_transfer_rates.md';
 
 

@@ -34,60 +37,60 @@ Data transfer prices will **not** be tiered as usage increases. Note that the pr
 ### Why are we introducing a pricing model for ClickPipes now? {#why-are-we-introducing-a-pricing-model-for-clickpipes-now}
 
 We decided to initially launch ClickPipes for free with the idea to gather feedback, refine features,
-and ensure it meets user needs.
-As the GA platform has grown and effectively stood the test of time by moving trillions of rows,
-introducing a pricing model allows us to continue improving the service,
+and ensure it meets user needs.
+As the GA platform has grown and effectively stood the test of time by moving trillions of rows,
+introducing a pricing model allows us to continue improving the service,
 maintaining the infrastructure, and providing dedicated support and new connectors.
 
 ### What are ClickPipes replicas? {#what-are-clickpipes-replicas}
 
-ClickPipes ingests data from remote data sources via a dedicated infrastructure
-that runs and scales independently of the ClickHouse Cloud service.
-For this reason, it uses dedicated compute replicas.
+ClickPipes ingests data from remote data sources via a dedicated infrastructure
+that runs and scales independently of the ClickHouse Cloud service.
+For this reason, it uses dedicated compute replicas.
 The diagrams below show a simplified architecture.
 
-For streaming ClickPipes, ClickPipes replicas access the remote data sources (e.g., a Kafka broker),
+For streaming ClickPipes, ClickPipes replicas access the remote data sources (e.g., a Kafka broker),
 pull the data, process and ingest it into the destination ClickHouse service.
 
-![ClickPipes Replicas - Streaming ClickPipes](images/external_clickpipes_pricing_faq_1.png)
+<img src={clickpipesPricingFaq1} alt="ClickPipes Replicas - Streaming ClickPipes" />
 
-In the case of object storage ClickPipes,
-the ClickPipes replica orchestrates the data loading task
-(identifying files to copy, maintaining the state, and moving partitions),
+In the case of object storage ClickPipes,
+the ClickPipes replica orchestrates the data loading task
+(identifying files to copy, maintaining the state, and moving partitions),
 while the data is pulled directly from the ClickHouse service.
 
-![ClickPipes Replicas - Object Storage ClickPipes](images/external_clickpipes_pricing_faq_2.png)
+<img src={clickpipesPricingFaq2} alt="ClickPipes Replicas - Object Storage ClickPipes" />
 
 ### What is the default number of replicas and their size? {#what-is-the-default-number-of-replicas-and-their-size}
 
-Each ClickPipe defaults to 1 replica that is provided with 2 GiB of RAM and 0.5 vCPU.
+Each ClickPipe defaults to 1 replica that is provided with 2 GiB of RAM and 0.5 vCPU.
 This corresponds to **0.25** ClickHouse compute units (1 unit = 8 GiB RAM, 2 vCPUs).
 
 ### Can ClickPipes replicas be scaled? {#can-clickpipes-replicas-be-scaled}
 
-Currently, only ClickPipes for streaming can be scaled horizontally
-by adding more replicas each with a base unit of **0.25** ClickHouse compute units.
+Currently, only ClickPipes for streaming can be scaled horizontally
+by adding more replicas each with a base unit of **0.25** ClickHouse compute units.
 Vertical scaling is also available on demand for specific use cases (adding more CPU and RAM per replica).
 
 ### How many ClickPipes replicas do I need? {#how-many-clickpipes-replicas-do-i-need}
 
-It depends on the workload throughput and latency requirements.
-We recommend starting with the default value of 1 replica, measuring your latency, and adding replicas if needed.
-Keep in mind that for Kafka ClickPipes, you also have to scale the Kafka broker partitions accordingly.
+It depends on the workload throughput and latency requirements.
+We recommend starting with the default value of 1 replica, measuring your latency, and adding replicas if needed.
+Keep in mind that for Kafka ClickPipes, you also have to scale the Kafka broker partitions accordingly.
 The scaling controls are available under “settings” for each streaming ClickPipe.
 
-![ClickPipes Replicas - How many ClickPipes replicas do I need?](images/external_clickpipes_pricing_faq_3.png)
+<img src={clickpipesPricingFaq3} alt="ClickPipes Replicas - How many ClickPipes replicas do I need?" />
 
 ### What does the ClickPipes pricing structure look like? {#what-does-the-clickpipes-pricing-structure-look-like}
 
 It consists of two dimensions:
-- **Compute**: Price per unit per hour
-  Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
+- **Compute**: Price per unit per hour
+  Compute represents the cost of running the ClickPipes replica pods whether they actively ingest data or not.
   It applies to all ClickPipes types.
-- **Ingested data**: per GB pricing
-  The ingested data rate applies to all streaming ClickPipes
+- **Ingested data**: per GB pricing
+  The ingested data rate applies to all streaming ClickPipes
 (Kafka, Confluent, Amazon MSK, Amazon Kinesis, Redpanda, WarpStream,
-Azure Event Hubs) for the data transferred via the replica pods.
+Azure Event Hubs) for the data transferred via the replica pods.
 The ingested data size (GB) is charged based on bytes received from the source (uncompressed or compressed).
 
 ### What are the ClickPipes public prices? {#what-are-the-clickpipes-public-prices}

@@ -103,8 +106,8 @@ $$
 (0.25 \times 0.20 \times 24) + (0.04 \times 1000) = \$41.2
 $$
 
-For object storage connectors (S3 and GCS),
-only the ClickPipes compute cost is incurred since the ClickPipes pod is not processing data
+For object storage connectors (S3 and GCS),
+only the ClickPipes compute cost is incurred since the ClickPipes pod is not processing data
 but only orchestrating the transfer which is operated by the underlying ClickHouse service:
 
 $$

@@ -117,13 +120,11 @@ The new pricing model will take effect for all organizations created after Janua
 
 ### What happens to current users? {#what-happens-to-current-users}
 
-Existing users will have a **60-day grace period** where the ClickPipes service continues to be offered for free.
+Existing users will have a **60-day grace period** where the ClickPipes service continues to be offered for free.
 Billing will automatically start for ClickPipes for existing users on **March 24th, 2025.**
 
 ### How does ClickPipes pricing compare to the market? {#how-does-clickpipes-pricing-compare-to-the-market}
 
-The philosophy behind ClickPipes pricing is
-to cover the operating costs of the platform while offering an easy and reliable way to move data to ClickHouse Cloud.
+The philosophy behind ClickPipes pricing is
+to cover the operating costs of the platform while offering an easy and reliable way to move data to ClickHouse Cloud.
 From that angle, our market analysis revealed that we are positioned competitively.
-
-
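Decomposing the worked example in the hunk above (the per-unit-hour and per-GB rates are read off the formula itself and are illustrative, not a price sheet): one default replica is 0.25 compute units running for 24 hours, plus 1000 GB of ingested data:

$$
\underbrace{0.25 \times 0.20 \times 24}_{\text{compute: }\$1.20} \;+\; \underbrace{0.04 \times 1000}_{\text{ingested data: }\$40.00} \;=\; \$41.20
$$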

docs/dictionary/index.md (14 additions & 8 deletions)

@@ -5,6 +5,9 @@ keywords: [dictionary, dictionaries]
 description: A dictionary provides a key-value representation of data for fast lookups.
 ---
 
+import dictionaryUseCases from '@site/static/images/dictionary/dictionary-use-cases.png';
+import dictionaryLeftAnyJoin from '@site/static/images/dictionary/dictionary-left-any-join.png';
+
 # Dictionary
 
 A dictionary in ClickHouse provides an in-memory [key-value](https://en.wikipedia.org/wiki/Key%E2%80%93value_database) representation of data from various [internal and external sources](/sql-reference/dictionaries#dictionary-sources), optimizing for super-low latency lookup queries.

@@ -13,15 +16,18 @@ Dictionaries are useful for:
 - Improving the performance of queries, especially when used with `JOIN`s
 - Enriching ingested data on the fly without slowing down the ingestion process
 
-![Uses cases for Dictionary in ClickHouse](./images/dictionary-use-cases.png)
+<img src={dictionaryUseCases}
+  class="image"
+  alt="Use cases for Dictionary in ClickHouse"
+  style={{width: '100%', background: 'none'}} />
 
 ## Speeding up joins using a Dictionary {#speeding-up-joins-using-a-dictionary}
 
 Dictionaries can be used to speed up a specific type of `JOIN`: the [`LEFT ANY` type](/sql-reference/statements/select/join#supported-types-of-join) where the join key needs to match the key attribute of the underlying key-value storage.
 
-<img src={require('./images/dictionary-left-any-join.png').default}
-  class='image'
-  alt='Using Dictionary with LEFT ANY JOIN'
+<img src={dictionaryLeftAnyJoin}
+  class="image"
+  alt="Using Dictionary with LEFT ANY JOIN"
   style={{width: '300px', background: 'none'}} />
 
 If this is the case, ClickHouse can exploit the dictionary to perform a [Direct Join](https://clickhouse.com/blog/clickhouse-fully-supports-joins-direct-join-part4#direct-join). This is ClickHouse's fastest join algorithm and is applicable when the underlying [table engine](/engines/table-engines) for the right-hand side table supports low-latency key-value requests. ClickHouse has three table engines providing this: [Join](/engines/table-engines/special/join) (that is basically a pre-calculated hash table), [EmbeddedRocksDB](/engines/table-engines/integrations/embedded-rocksdb) and [Dictionary](/engines/table-engines/special/dictionary). We will describe the dictionary-based approach, but the mechanics are the same for all three engines.

@@ -49,7 +55,7 @@ SELECT
 Title,
 UpVotes,
 DownVotes,
-abs(UpVotes - DownVotes) AS Controversial_ratio
+abs(UpVotes - DownVotes) AS Controversial_ratio
 FROM posts
 INNER JOIN
 (

@@ -80,7 +86,7 @@ Peak memory usage: 3.18 GiB.
 
 >**Use smaller datasets on the right side of `JOIN`**: This query may seem more verbose than is required, with the filtering on `PostId`s occurring in both the outer and sub queries. This is a performance optimization which ensures the query response time is fast. For optimal performance, always ensure the right side of the `JOIN` is the smaller set and as small as possible. For tips on optimizing JOIN performance and understanding the algorithms available, we recommend [this series of blog articles](https://clickhouse.com/blog/clickhouse-fully-supports-joins-part1).
 
-While this query is fast, it relies on us to write the `JOIN` carefully to achieve good performance. Ideally, we would simply filter the posts to those containing "SQL", before looking at the `UpVote` and `DownVote` counts for the subset of blogs to compute our metric.
+While this query is fast, it relies on us to write the `JOIN` carefully to achieve good performance. Ideally, we would simply filter the posts to those containing "SQL", before looking at the `UpVote` and `DownVote` counts for the subset of blogs to compute our metric.
 
 #### Applying a dictionary {#applying-a-dictionary}
 

@@ -114,7 +120,7 @@ FROM votes
 GROUP BY PostId
 ```
 
-To create our dictionary requires the following DDL - note the use of our above query:
+To create our dictionary requires the following DDL - note the use of our above query:
 
 ```sql
 CREATE DICTIONARY votes_dict

@@ -328,7 +334,7 @@ For database sources such as ClickHouse and Postgres, you can set up a query tha
 
 ### Other dictionary types {#other-dictionary-types}
 
-ClickHouse also supports [Hierarchical](/sql-reference/dictionaries#hierarchical-dictionaries), [Polygon](/sql-reference/dictionaries#polygon-dictionaries) and [Regular Expression](/sql-reference/dictionaries#regexp-tree-dictionary) dictionaries.
+ClickHouse also supports [Hierarchical](/sql-reference/dictionaries#hierarchical-dictionaries), [Polygon](/sql-reference/dictionaries#polygon-dictionaries) and [Regular Expression](/sql-reference/dictionaries#regexp-tree-dictionary) dictionaries.
 
 ### More reading {#more-reading}
 
Binary file changed (-32.3 KB); not shown.
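Once a dictionary such as the `votes_dict` defined in the hunks above exists, lookups typically go through `dictGet`; a sketch under assumptions (the attribute names `UpVotes`/`DownVotes` and the `posts` table appear in the surrounding queries, but the exact key and column names may differ in the full file):

```sql
-- Per-key lookups against the dictionary, replacing an explicit JOIN.
SELECT
    Id,
    Title,
    dictGet('votes_dict', 'UpVotes', Id)   AS UpVotes,
    dictGet('votes_dict', 'DownVotes', Id) AS DownVotes
FROM posts
WHERE Title ILIKE '%SQL%'
LIMIT 10;
```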

docs/guides/sre/configuring-ssl.md (3 additions & 1 deletion)

@@ -4,6 +4,7 @@ sidebar_label: Configuring SSL-TLS
 sidebar_position: 20
 ---
 import SelfManaged from '@site/docs/_snippets/_self_managed_only_automated.md';
+import configuringSsl01 from '@site/static/images/guides/sre/configuring-ssl_01.png';
 
 # Configuring SSL-TLS
 

@@ -450,7 +451,8 @@ The typical [4 letter word (4lW)](/guides/sre/keeper/index.md#four-letter-word-c
 
 5. Log into the Play UI using the `https` interface at `https://chnode1.marsnet.local:8443/play`.
 
-![Play UI](images/configuring-ssl_01.png)
+<img src={configuringSsl01}
+  alt="Configuring SSL" />
 
 :::note
 the browser will show an untrusted certificate since it is being reached from a workstation and the certificates are not in the root CA stores on the client machine.
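As a quick follow-up to step 5, a smoke test you might run in the Play UI once the HTTPS endpoint loads (any query would do; these functions simply confirm which user and server version answered over TLS):

```sql
SELECT currentUser(), version();
```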
