docs/en/cloud/bestpractices/usagelimits.md (14 additions, 16 deletions)

@@ -4,28 +4,26 @@ sidebar_label: Usage Limits
 title: Usage Limits
 ---
 
-
-## Database Limits
-Clickhouse is very fast and reliable, but any database has its limits. For example, having too many tables or databases could negatively affect performance. To avoid that, Clickhouse Cloud has guardrails for several types of items.
+While ClickHouse is known for its speed and reliability, optimal performance is achieved within certain operating parameters. For example, having too many tables, databases or parts could negatively impact performance. To avoid this, ClickHouse Cloud has guardrails set up for several types of items. You can find details of these guardrails below.
 
 :::tip
-If you've reached one of those limits, it may mean that you are implementing your use case in an unoptimized way. You can contact our support so we can help you refine your use case to avoid going through the limits or to increase the limits in a guided way.
+If you've run up against one of these guardrails, it's possible that you are implementing your use case in an unoptimized way. Contact our support team and we will gladly help you refine your use case to avoid exceeding the guardrails, or look together at how we can increase them in a controlled manner.
 :::
 
-# Tables
-Clickhouse Cloud have a limit of **5000** tables per instance
-
-# Databases
-Clickhouse Cloud have a limit of **1000** databases per instance
-
-# Partitions
-Clickhouse Cloud have a limit of **50000** [partitions](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/custom-partitioning-key) per instance
-
-# Parts
-Clickhouse Cloud have a limit of **100000** [parts](https://clickhouse.com/docs/en/operations/system-tables/parts) per instance
+- **Databases**: 1000
+- **Tables**: 5000-10k
+- **Columns**: ∼1000 (wide format is preferred to compact)
+- **Partitions**: 50k
+- **Parts**: 100k across the entire instance
+- **Part size**: 150 GB
+- **Services**: 20 (soft)
+- **Low cardinality**: 10k or less
+- **Primary keys in a table**: 4-5 that sufficiently filter down the data
+- **Concurrency**: default 100, can be increased to 1000 per node
+- **Batch ingest**: anything > 1M rows will be split by the system into 1M-row blocks
 
 :::note
-For Single Replica Services, the maximum number of Databases is restricted to 100, and the maximum number of Tables is restricted to 500. In addition, Storage for Basic Tier Services is limited to 1 TB.
+For Single Replica Services, the maximum number of databases is restricted to 100, and the maximum number of tables is restricted to 500. In addition, storage for Basic Tier Services is limited to 1 TB.
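
As a quick way to see where a service stands relative to the guardrails listed above, the counts can be read from ClickHouse's standard system tables. This is an illustrative self-check rather than part of the documentation change itself; which schemas you exclude (for example `system` and `information_schema`) is an assumption that may differ per service.

```sql
-- Illustrative self-check against the database / table / partition / part guardrails.
-- Counts user databases and tables, excluding ClickHouse's own system schemas.
SELECT count() AS user_databases
FROM system.databases
WHERE name NOT IN ('system', 'information_schema', 'INFORMATION_SCHEMA');

SELECT count() AS user_tables
FROM system.tables
WHERE database NOT IN ('system', 'information_schema', 'INFORMATION_SCHEMA');

-- Active data parts across the instance (guardrail: 100k).
SELECT count() AS active_parts
FROM system.parts
WHERE active;

-- Approximate partition count: distinct (database, table, partition) among active parts.
SELECT count() AS partitions
FROM (
    SELECT DISTINCT database, table, partition_id
    FROM system.parts
    WHERE active
);
```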
docs/en/cloud/get-started/query-endpoints.md (0 additions, 4 deletions)

@@ -5,12 +5,8 @@ description: Easily spin up REST API endpoints from your saved queries
 keywords: [api, query api endpoints, query endpoints, query rest api]
 ---
 
-import BetaBadge from '@theme/badges/BetaBadge';
-
 # Query API Endpoints
 
-<BetaBadge />
-
 The **Query API Endpoints** feature allows you to create an API endpoint directly from any saved SQL query in the ClickHouse Cloud console. You'll be able to access API endpoints via HTTP to execute your saved queries without needing to connect to your ClickHouse Cloud service via a native driver.
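
Since this page describes calling saved queries over HTTP, a minimal sketch of such a call is shown below. The endpoint URL, key ID, and key secret are placeholders obtained from the ClickHouse Cloud console when the endpoint is created; the exact request and authentication format is documented on the page itself, so treat this only as an illustration of the general shape.

```bash
# Hypothetical call to a Query API Endpoint; replace the URL and credentials
# with the values shown in the ClickHouse Cloud console for your endpoint.
curl -X POST '<endpoint-url-from-console>' \
  -u '<api-key-id>:<api-key-secret>' \
  -H 'Content-Type: application/json'
```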
docs/en/cloud/manage/jan2025_faq/dimensions.md (3 additions, 3 deletions)

@@ -49,13 +49,13 @@ It consists of two dimensions:
 
 ### How does it look in an illustrative example?
 
-For example, ingesting 1 TB of data over 24 hours using the Kafka connector using a single replica (0.5 compute unit) will cost:
+For example, ingesting 1 TB of data over 24 hours using the Kafka connector with a single replica (0.25 compute unit) will cost:
 
-`0.5 x 0.20 x 24 + 0.04 x 1000 = $42.4`
+`0.25 x 0.20 x 24 + 0.04 x 1000 = $41.2`
 
 For object storage connectors (S3 and GCS), only the ClickPipes compute cost is incurred since the ClickPipes pod is not processing data but only orchestrating the transfer, which is operated by the underlying ClickHouse service:
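
For readers who want to see how the $41.2 figure decomposes, the same arithmetic can be written out as a query. The 0.20 per compute-unit-hour and 0.04 per GB factors are simply the numbers that appear in the formula above; labelling them this way is an interpretation of that formula, not additional pricing information.

```sql
-- Breaking down the illustrative ClickPipes cost: compute + ingest.
SELECT
    0.25 * 0.20 * 24               AS compute_cost_usd,  -- 0.25 units for 24 hours
    0.04 * 1000                    AS ingest_cost_usd,    -- 1 TB = 1000 GB ingested
    0.25 * 0.20 * 24 + 0.04 * 1000 AS total_cost_usd      -- = 41.2
```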
docs/en/cloud/manage/scaling.md (1 addition, 1 deletion)

@@ -73,7 +73,7 @@ However, these services can be scaled vertically by contacting support.
 
 You can use ClickHouse Cloud [public APIs](https://clickhouse.com/docs/en/cloud/manage/api/swagger#/paths/~1v1~1organizations~1:organizationId~1services~1:serviceId~1scaling/patch) to scale your service by updating the scaling settings for the service or adjust the number of replicas from the cloud console.
 
-A **Scale** or **Enterprise** ClickHouse service must have a minimum of `2` replicas.
+**Scale** and **Enterprise** tiers do support single-replica services. However, a service in these tiers that starts with multiple replicas, or scales out to multiple replicas, can only be scaled back in to a minimum of `2` replicas.
 
 :::note
 Services can scale horizontally to a maximum of 20 replicas. If you need additional replicas, please contact our support team.
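
The Swagger path linked in the diff above gives the shape of the scaling call. A hedged sketch follows: the PATCH verb and the URL path come from that API reference, but the base URL, organization ID, service ID, API key pair, and the JSON field in the body are placeholders to verify against the Swagger schema for your API version.

```bash
# Illustrative only: adjust a service's scaling settings via the Cloud public API.
# <organizationId>, <serviceId>, the key pair, and the JSON body field are placeholders;
# check the linked Swagger reference for the exact property names.
curl -X PATCH \
  'https://api.clickhouse.cloud/v1/organizations/<organizationId>/services/<serviceId>/scaling' \
  -u '<api-key-id>:<api-key-secret>' \
  -H 'Content-Type: application/json' \
  -d '{ "numReplicas": 3 }'
```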