
Commit 4455699

Merge branch 'main' into separate-out-new-build
2 parents 6fda746 + 4c51dec


43 files changed: +2254 −124 lines changed


.github/workflows/build-search.yml

Lines changed: 2 additions & 2 deletions
```diff
@@ -21,10 +21,10 @@ jobs:
 
     steps:
       - name: Checkout Repository
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
 
       - name: Set up Node.js
-        uses: actions/setup-node@v3
+        uses: actions/setup-node@v4
         with:
           node-version: '20'
 
```

.github/workflows/check-build.yml

Lines changed: 2 additions & 2 deletions
```diff
@@ -18,7 +18,7 @@ jobs:
     steps:
       # Step 1: Check out the repository
       - name: Check out repository
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
 
       # Step 2: Set up environment if required (e.g., installing Aspell)
      - name: Install Aspell
@@ -33,7 +33,7 @@
 
       # Step 4: Setup Python and dependencies for KB checker
       - name: Set up Python
-        uses: actions/setup-python@v3
+        uses: actions/setup-python@v5
         with:
           python-version: '3.x'
 
```

.github/workflows/table_of_contents.yml

Lines changed: 2 additions & 2 deletions
```diff
@@ -23,15 +23,15 @@ jobs:
 
       # Step 1 - Check out the repository
       - name: Check out repository
-        uses: actions/checkout@v3
+        uses: actions/checkout@v4
 
       # Step 2 - Pull changes
       - name: Pull remote Changes
         run: git pull
 
       # Step 3 - Setup python
       - name: Set up python
-        uses: actions/setup-python@v3
+        uses: actions/setup-python@v5
         with:
           python-version: '3.x'
 
```

.gitignore

Lines changed: 1 addition & 0 deletions
```diff
@@ -28,6 +28,7 @@ clickhouse-docs.code-workspace
 yarn.lock
 yarn.error-log
 .comments
+yarn-error.log
 
 # Output files used by scripts to verify links
 sidebar_links.txt
```

docs/en/cloud/bestpractices/usagelimits.md

Lines changed: 14 additions & 10 deletions
```diff
@@ -4,22 +4,26 @@ sidebar_label: Usage Limits
 title: Usage Limits
 ---
 
-
-## Database Limits
-Clickhouse is very fast and reliable, but any database has its limits. For example, having too many tables or databases could negatively affect performance. To avoid that, Clickhouse Cloud has guardrails for several types of items.
+While ClickHouse is known for its speed and reliability, optimal performance is achieved within certain operating parameters. For example, having too many tables, databases or parts could negatively impact performance. To avoid this, Clickhouse Cloud has guardrails set up for several types of items. You can find details of these guardrails below.
 
 :::tip
-If you've reached one of those limits, it may mean that you are implementing your use case in an unoptimized way. You can contact our support so we can help you refine your use case to avoid going through the limits or to increase the limits in a guided way.
+If you've run up against one of these guardrails, it's possible that you are implementing your use case in an unoptimized way. Contact our support team and we will gladly help you refine your use case to avoid exceeding the guardrails or look together at how we can increase them in a controlled manner.
 :::
 
-# Partitions
-Clickhouse Cloud have a limit of **50000** [partitions](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/custom-partitioning-key) per instance
-
-# Parts
-Clickhouse Cloud have a limit of **100000** [parts](https://clickhouse.com/docs/en/operations/system-tables/parts) per instance
+- **Databases**: 1000
+- **Tables**: 5000-10k
+- **Columns**: ∼1000 (wide format is preferred to compact)
+- **Partitions**: 50k
+- **Parts**: 100k across the entire instance
+- **Part size**: 150gb
+- **Services**: 20 (soft)
+- **Low cardinality**: 10k or less
+- **Primary keys in a table**: 4-5 that sufficiently filter down the data
+- **Concurrency**: default 100, can be increased to 1000 per node
+- **Batch ingest**: anything > 1M will be split by the system in 1M row blocks
 
 :::note
-For Single Replica Services, the maximum number of Databases is restricted to 100, and the maximum number of Tables is restricted to 500. In addition, Storage for Basic Tier Services is limited to 1 TB.
+For Single Replica Services, the maximum number of databases is restricted to 100, and the maximum number of tables is restricted to 500. In addition, storage for Basic Tier Services is limited to 1 TB.
 :::
 
 
```

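As a purely illustrative aside (not part of this commit), the sketch below shows one way a service's current usage could be compared against the parts and partitions guardrails listed in the diff above, using the clickhouse-connect Python client and the `system.parts` table referenced in the removed text. The hostname and credentials are placeholders.

```python
# Illustrative sketch only (not part of this commit): compare current usage against
# the "Parts: 100k" and "Partitions: 50k" guardrails documented above.
# Hostname and credentials are placeholders.
import clickhouse_connect

client = clickhouse_connect.get_client(
    host="your-service.clickhouse.cloud",  # placeholder service hostname
    username="default",
    password="your_password",
    secure=True,
)

# Active data parts and distinct (database, table, partition) combinations service-wide.
result = client.query(
    "SELECT count() AS parts, uniqExact(database, table, partition_id) AS partitions "
    "FROM system.parts WHERE active"
)
parts, partitions = result.result_rows[0]

print(f"active parts: {parts} (guardrail 100k)")
print(f"partitions:   {partitions} (guardrail 50k)")
```
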
docs/en/cloud/get-started/query-endpoints.md

Lines changed: 0 additions & 4 deletions
```diff
@@ -5,12 +5,8 @@ description: Easily spin up REST API endpoints from your saved queries
 keywords: [api, query api endpoints, query endpoints, query rest api]
 ---
 
-import BetaBadge from '@theme/badges/BetaBadge';
-
 # Query API Endpoints
 
-<BetaBadge />
-
 The **Query API Endpoints** feature allows you to create an API endpoint directly from any saved SQL query in the ClickHouse Cloud console. You'll be able to access API endpoints via HTTP to execute your saved queries without needing to connect to your ClickHouse Cloud service via a native driver.
 
 ## Quick-start Guide
```

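As another aside (not part of this commit, which only removes the BetaBadge), the feature described above is invoked over plain HTTP. A minimal sketch using Python's requests library follows; the endpoint URL, key id/secret, and request body shape are illustrative assumptions, and the real values come from the ClickHouse Cloud console as covered in the Quick-start Guide.

```python
# Minimal sketch (not part of this commit): executing a Query API Endpoint over HTTP.
# The URL, key id/secret, and body fields below are illustrative assumptions;
# copy the real values from the ClickHouse Cloud console (see the Quick-start Guide).
import requests

ENDPOINT_URL = "https://console-api.clickhouse.cloud/.api/query-endpoints/<endpoint-id>/run"  # placeholder
KEY_ID = "<openapi-key-id>"          # placeholder
KEY_SECRET = "<openapi-key-secret>"  # placeholder

response = requests.post(
    ENDPOINT_URL,
    auth=(KEY_ID, KEY_SECRET),                              # HTTP basic auth with an API key
    json={"queryVariables": {}, "format": "JSONEachRow"},   # assumed request shape
    timeout=30,
)
response.raise_for_status()
print(response.text)  # query results in the requested format
```
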
docs/en/guides/best-practices/index.md

Lines changed: 15 additions & 17 deletions
```diff
@@ -9,21 +9,19 @@ description: Overview page of Performance and Optimizations
 This section contains tips and best practices for improving performance with ClickHouse.
 We recommend users read [Core Concepts](/docs/en/parts) as a precursor to this section,
 which covers the main concepts required to improve performance,
-especially [Primary Indices](/docs/en/optimize/sparse-primary-indexes).
+especially [Primary Indices](./sparse-primary-indexes.md).
 
-| Topic | Description |
-|-------|-------------|
-| [Overview](/docs/en/optimize) | Provides an overview and suggested reading before working through this section of the docs. |
-| [Query Optimization Guide](/docs/en/optimize/query-optimization) | A good place to start for query optimization, this simple guide describes common scenarios of how to use different performance and optimization techniques to improve query performance. |
-| [Analyzer](/docs/en/operations/analyzer) | Looks at the ClickHouse Analyzer, a tool for analyzing and optimizing queries. Discusses how the Analyzer works, its benefits (e.g., identifying performance bottlenecks), and how to use it to improve your ClickHouse queries' efficiency. |
-| [Asynchronous Inserts](/docs/en/optimize/asynchronous-inserts) | Focuses on ClickHouse's asynchronous inserts feature. It likely explains how asynchronous inserts work (batching data on the server for efficient insertion) and their benefits (improved performance by offloading insert processing). It might also cover enabling asynchronous inserts and considerations for using them effectively in your ClickHouse environment. |
-| [Avoid Mutations](/docs/en/optimize/avoid-mutations) | Discusses the importance of avoiding mutations (updates and deletes) in ClickHouse. It recommends using append-only inserts for optimal performance and suggests alternative approaches for handling data changes. |
-| [Avoid Nullable Columns](/docs/en/optimize/avoid-nullable-columns)| Discusses why you may want to avoid Nullable columns to save space and increase performance. Demonstrates how to set a default value for a column. |
-| [Avoid Optimize Final](/docs/en/optimize/avoidoptimizefinal) | Explains how the `OPTIMIZE TABLE ... FINAL` query is resource-intensive and suggests alternative approaches to optimize ClickHouse performance. |
-| [Bulk Inserts](/docs/en/optimize/bulk-inserts) | Explains the benefits of using bulk inserts in ClickHouse. |
-| [Partitioning Key](/docs/en/optimize/partitioning-key) | Delves into ClickHouse partition key optimization. Explains how choosing the right partition key can significantly improve query performance by allowing ClickHouse to quickly locate relevant data segments. Covers best practices for selecting efficient partition keys and potential pitfalls to avoid. |
-| [Data Skipping Indexes](/docs/en/optimize/skipping-indexes) | Explains data skipping indexes as a way to optimize performance. |
-| [Sparse Primary Indexes](/docs/en/optimize/sparse-primary-indexes)| Discusses sparse primary indexes in ClickHouse which are used to significantly accelerate query execution. |
-| [Query Profiling](/docs/en/operations/optimizing-performance/sampling-query-profiler) | Explains ClickHouse's Sampling Query Profiler, a tool that helps analyze query execution. |
-| [Testing Hardware](/docs/en/operations/performance-test) | How to run a basic ClickHouse performance test on any server without installation of ClickHouse packages. (Not applicable to ClickHouse Cloud) |
-| [Query Cache](/docs/en/operations/query-cache) | Details ClickHouse's Query Cache, a feature that aims to improve performance by caching the results of frequently executed `SELECT` queries. |
+| Topic | Description |
+|-------|-------------|
+| [Query Optimization Guide](/docs/en/optimize/query-optimization) | A good place to start for query optimization, this simple guide describes common scenarios of how to use different performance and optimization techniques to improve query performance. |
+| [Partitioning Key](/docs/en/optimize/partitioning-key) | Delves into ClickHouse partition key optimization. Explains how choosing the right partition key can significantly improve query performance by allowing ClickHouse to quickly locate relevant data segments. Covers best practices for selecting efficient partition keys and potential pitfalls to avoid. |
+| [Data Skipping Indexes](/docs/en/optimize/skipping-indexes) | Explains data skipping indexes as a way to optimize performance. |
+| [Bulk Inserts](/docs/en/optimize/bulk-inserts) | Explains the benefits of using bulk inserts in ClickHouse. |
+| [Asynchronous Inserts](/docs/en/optimize/asynchronous-inserts) | Focuses on ClickHouse's asynchronous inserts feature. It likely explains how asynchronous inserts work (batching data on the server for efficient insertion) and their benefits (improved performance by offloading insert processing). It might also cover enabling asynchronous inserts and considerations for using them effectively in your ClickHouse environment. |
+| [Avoid Mutations](/docs/en/optimize/avoid-mutations) | Discusses the importance of avoiding mutations (updates and deletes) in ClickHouse. It recommends using append-only inserts for optimal performance and suggests alternative approaches for handling data changes. |
+| [Avoid Nullable Columns](/docs/en/optimize/avoid-nullable-columns) | Discusses why you may want to avoid Nullable columns to save space and increase performance. Demonstrates how to set a default value for a column. |
+| [Avoid Optimize Final](/docs/en/optimize/avoidoptimizefinal) | Explains how the `OPTIMIZE TABLE ... FINAL` query is resource-intensive and suggests alternative approaches to optimize ClickHouse performance. |
+| [Analyzer](/docs/en/operations/analyzer) | Looks at the ClickHouse Analyzer, a tool for analyzing and optimizing queries. Discusses how the Analyzer works, its benefits (e.g., identifying performance bottlenecks), and how to use it to improve your ClickHouse queries' efficiency. |
+| [Query Profiling](/docs/en/operations/optimizing-performance/sampling-query-profiler) | Explains ClickHouse's Sampling Query Profiler, a tool that helps analyze query execution. |
+| [Query Cache](/docs/en/operations/query-cache) | Details ClickHouse's Query Cache, a feature that aims to improve performance by caching the results of frequently executed `SELECT` queries. |
+| [Testing Hardware](/docs/en/operations/performance-test) | How to run a basic ClickHouse performance test on any server without installation of ClickHouse packages. (Not applicable to ClickHouse Cloud) |
```
