docs/en/cloud/bestpractices/usagelimits.md (+14 −10)

@@ -4,22 +4,26 @@ sidebar_label: Usage Limits
 title: Usage Limits
 ---

-## Database Limits
-Clickhouse is very fast and reliable, but any database has its limits. For example, having too many tables or databases could negatively affect performance. To avoid that, Clickhouse Cloud has guardrails for several types of items.
+While ClickHouse is known for its speed and reliability, optimal performance is achieved within certain operating parameters. For example, having too many tables, databases or parts could negatively impact performance. To avoid this, Clickhouse Cloud has guardrails set up for several types of items. You can find details of these guardrails below.

 :::tip
-If you've reached one of those limits, it may mean that you are implementing your use case in an unoptimized way. You can contact our support so we can help you refine your use case to avoid going through the limits or to increase the limits in a guided way.
+If you've run up against one of these guardrails, it's possible that you are implementing your use case in an unoptimized way. Contact our support team and we will gladly help you refine your use case to avoid exceeding the guardrails or look together at how we can increase them in a controlled manner.
 :::

-# Partitions
-Clickhouse Cloud have a limit of **50000** [partitions](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/custom-partitioning-key) per instance
-
-# Parts
-Clickhouse Cloud have a limit of **100000** [parts](https://clickhouse.com/docs/en/operations/system-tables/parts) per instance
+- **Databases**: 1000
+- **Tables**: 5000-10k
+- **Columns**: ∼1000 (wide format is preferred to compact)
+- **Partitions**: 50k
+- **Parts**: 100k across the entire instance
+- **Part size**: 150gb
+- **Services**: 20 (soft)
+- **Low cardinality**: 10k or less
+- **Primary keys in a table**: 4-5 that sufficiently filter down the data
+- **Concurrency**: default 100, can be increased to 1000 per node
+- **Batch ingest**: anything > 1M will be split by the system in 1M row blocks

 :::note
-For Single Replica Services, the maximum number of Databases is restricted to 100, and the maximum number of Tables is restricted to 500. In addition, Storage for Basic Tier Services is limited to 1 TB.
+For Single Replica Services, the maximum number of databases is restricted to 100, and the maximum number of tables is restricted to 500. In addition, storage for Basic Tier Services is limited to 1 TB.
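The batch-ingest guardrail above (inserts larger than 1M rows are split by the server into 1M-row blocks) can also be approximated on the client side by pre-chunking large batches before sending them. A minimal sketch in plain Python — `chunk_rows` is an illustrative helper, not part of any ClickHouse client library, and the block size is shrunk here for demonstration:

```python
def chunk_rows(rows, block_size=1_000_000):
    """Yield successive blocks of at most block_size rows each."""
    for start in range(0, len(rows), block_size):
        yield rows[start:start + block_size]

# Demonstration with a tiny block size instead of the real 1M limit:
rows = [(i, f"event-{i}") for i in range(10)]
blocks = list(chunk_rows(rows, block_size=4))
print([len(b) for b in blocks])  # → [4, 4, 2]
```

Each block could then be sent as its own INSERT, so no single server-side insert exceeds the guardrail.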

docs/en/cloud/get-started/query-endpoints.md (+0 −4)

@@ -5,12 +5,8 @@ description: Easily spin up REST API endpoints from your saved queries
 keywords: [api, query api endpoints, query endpoints, query rest api]
 ---

-import BetaBadge from '@theme/badges/BetaBadge';
-
 # Query API Endpoints

-<BetaBadge />
-
 The **Query API Endpoints** feature allows you to create an API endpoint directly from any saved SQL query in the ClickHouse Cloud console. You'll be able to access API endpoints via HTTP to execute your saved queries without needing to connect to your ClickHouse Cloud service via a native driver.
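Calling such an endpoint is a plain HTTPS POST. A hedged sketch using only Python's standard library: the URL, key, and JSON body fields below are placeholders (the real endpoint URL and API key are shown in the Cloud console when you save a query as an endpoint), and the request is only constructed here, not sent:

```python
import json
from urllib.request import Request

# Placeholder values -- substitute the endpoint URL and API key that the
# ClickHouse Cloud console displays for your saved query.
ENDPOINT_URL = "https://example.invalid/query-endpoints/my-endpoint/run"
API_KEY = "placeholder-key"

# Illustrative body shape: query variables plus a desired output format.
payload = json.dumps({"queryVariables": {}, "format": "JSONEachRow"}).encode()

request = Request(
    ENDPOINT_URL,
    data=payload,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(request) would execute the saved query over HTTP
# and return its result; omitted here because it needs a live service.
```

Any HTTP client works the same way; no native ClickHouse driver is involved.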

 |[Overview](/docs/en/optimize)| Provides an overview and suggested reading before working through this section of the docs. |
-|[Query Optimization Guide](/docs/en/optimize/query-optimization)| A good place to start for query optimization, this simple guide describes common scenarios of how to use different performance and optimization techniques to improve query performance. |
-|[Analyzer](/docs/en/operations/analyzer)| Looks at the ClickHouse Analyzer, a tool for analyzing and optimizing queries. Discusses how the Analyzer works, its benefits (e.g., identifying performance bottlenecks), and how to use it to improve your ClickHouse queries' efficiency. |
-|[Asynchronous Inserts](/docs/en/optimize/asynchronous-inserts)| Focuses on ClickHouse's asynchronous inserts feature. It likely explains how asynchronous inserts work (batching data on the server for efficient insertion) and their benefits (improved performance by offloading insert processing). It might also cover enabling asynchronous inserts and considerations for using them effectively in your ClickHouse environment. |
-|[Avoid Mutations](/docs/en/optimize/avoid-mutations)| Discusses the importance of avoiding mutations (updates and deletes) in ClickHouse. It recommends using append-only inserts for optimal performance and suggests alternative approaches for handling data changes. |
-|[Avoid Nullable Columns](/docs/en/optimize/avoid-nullable-columns)| Discusses why you may want to avoid Nullable columns to save space and increase performance. Demonstrates how to set a default value for a column. |
-|[Avoid Optimize Final](/docs/en/optimize/avoidoptimizefinal)| Explains how the `OPTIMIZE TABLE ... FINAL` query is resource-intensive and suggests alternative approaches to optimize ClickHouse performance. |
-|[Bulk Inserts](/docs/en/optimize/bulk-inserts)| Explains the benefits of using bulk inserts in ClickHouse. |
-|[Partitioning Key](/docs/en/optimize/partitioning-key)| Delves into ClickHouse partition key optimization. Explains how choosing the right partition key can significantly improve query performance by allowing ClickHouse to quickly locate relevant data segments. Covers best practices for selecting efficient partition keys and potential pitfalls to avoid. |
-|[Data Skipping Indexes](/docs/en/optimize/skipping-indexes)| Explains data skipping indexes as a way to optimize performance. |
-|[Sparse Primary Indexes](/docs/en/optimize/sparse-primary-indexes)| Discusses sparse primary indexes in ClickHouse which are used to significantly accelerate query execution. |
-|[Query Profiling](/docs/en/operations/optimizing-performance/sampling-query-profiler)| Explains ClickHouse's Sampling Query Profiler, a tool that helps analyze query execution. |
-|[Testing Hardware](/docs/en/operations/performance-test)| How to run a basic ClickHouse performance test on any server without installation of ClickHouse packages. (Not applicable to ClickHouse Cloud) |
-|[Query Cache](/docs/en/operations/query-cache)| Details ClickHouse's Query Cache, a feature that aims to improve performance by caching the results of frequently executed `SELECT` queries. |
+|[Query Optimization Guide](/docs/en/optimize/query-optimization)| A good place to start for query optimization, this simple guide describes common scenarios of how to use different performance and optimization techniques to improve query performance. |
+|[Partitioning Key](/docs/en/optimize/partitioning-key)| Delves into ClickHouse partition key optimization. Explains how choosing the right partition key can significantly improve query performance by allowing ClickHouse to quickly locate relevant data segments. Covers best practices for selecting efficient partition keys and potential pitfalls to avoid. |
+|[Data Skipping Indexes](/docs/en/optimize/skipping-indexes)| Explains data skipping indexes as a way to optimize performance. |
+|[Bulk Inserts](/docs/en/optimize/bulk-inserts)| Explains the benefits of using bulk inserts in ClickHouse. |
+|[Asynchronous Inserts](/docs/en/optimize/asynchronous-inserts)| Focuses on ClickHouse's asynchronous inserts feature. It likely explains how asynchronous inserts work (batching data on the server for efficient insertion) and their benefits (improved performance by offloading insert processing). It might also cover enabling asynchronous inserts and considerations for using them effectively in your ClickHouse environment. |
+|[Avoid Mutations](/docs/en/optimize/avoid-mutations)| Discusses the importance of avoiding mutations (updates and deletes) in ClickHouse. It recommends using append-only inserts for optimal performance and suggests alternative approaches for handling data changes. |
+|[Avoid Nullable Columns](/docs/en/optimize/avoid-nullable-columns)| Discusses why you may want to avoid Nullable columns to save space and increase performance. Demonstrates how to set a default value for a column. |
+|[Avoid Optimize Final](/docs/en/optimize/avoidoptimizefinal)| Explains how the `OPTIMIZE TABLE ... FINAL` query is resource-intensive and suggests alternative approaches to optimize ClickHouse performance. |
+|[Analyzer](/docs/en/operations/analyzer)| Looks at the ClickHouse Analyzer, a tool for analyzing and optimizing queries. Discusses how the Analyzer works, its benefits (e.g., identifying performance bottlenecks), and how to use it to improve your ClickHouse queries' efficiency. |
+|[Query Profiling](/docs/en/operations/optimizing-performance/sampling-query-profiler)| Explains ClickHouse's Sampling Query Profiler, a tool that helps analyze query execution. |
+|[Query Cache](/docs/en/operations/query-cache)| Details ClickHouse's Query Cache, a feature that aims to improve performance by caching the results of frequently executed `SELECT` queries. |
+|[Testing Hardware](/docs/en/operations/performance-test)| How to run a basic ClickHouse performance test on any server without installation of ClickHouse packages. (Not applicable to ClickHouse Cloud) |