-The data in this system table is held locally on each node in ClickHouse Cloud. Obtaining a complete view of all data, therefore, requires the `clusterAllReplicas` function. See [here](/operations/system-tables#system-tables-in-clickhouse-cloud) for further details.
+The data in this system table is held locally on each node in ClickHouse Cloud. Obtaining a complete view of all data, therefore, requires the `clusterAllReplicas` function. See [here](/operations/system-tables/overview#system-tables-in-clickhouse-cloud) for further details.
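The `clusterAllReplicas` pattern this note points to can be sketched as follows; `default` is the cluster name used in ClickHouse Cloud, and `system.parts` stands in for whichever system table is being inspected:

```sql
-- Query a node-local system table across every replica.
-- hostname() shows which replica each row came from.
SELECT hostname() AS replica, database, table, sum(rows) AS rows
FROM clusterAllReplicas('default', system.parts)
WHERE active
GROUP BY replica, database, table
ORDER BY rows DESC
LIMIT 10;
```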
docs/chdb/guides/querying-parquet.md (+1 −1)

@@ -48,7 +48,7 @@ But first, let's install `chDB`:
 import chdb
 ```

-When querying Parquet files, we can use the [`ParquetMetadata`](/interfaces/formats#parquetmetadata-data-format-parquet-metadata) input format to have it return Parquet metadata rather than the content of the file.
+When querying Parquet files, we can use the [`ParquetMetadata`](/interfaces/formats/ParquetMetadata) input format to have it return Parquet metadata rather than the content of the file.

 Let's use the `DESCRIBE` clause to see the fields returned when we use this format:
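As a sketch of the usage the changed line describes (the file name `example.parquet` is a placeholder):

```sql
-- List the fields the ParquetMetadata input format exposes.
DESCRIBE file('example.parquet', ParquetMetadata);

-- Return the file's metadata (row groups, columns, encodings)
-- instead of its row data.
SELECT *
FROM file('example.parquet', ParquetMetadata);
```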
docs/cloud/manage/cloud-tiers.md (+1 −1)

@@ -186,7 +186,7 @@ Caters to large-scale, mission critical deployments that have stringent security
 - Single Sign On (SSO)
 - Enhanced Encryption: For AWS and GCP services. Services are encrypted by our key by default and can be rotated to their key to enable Customer Managed Encryption Keys (CMEK).
 - Allows Scheduled upgrades: Users can select the day of the week/time window for upgrades, both database and cloud releases.
docs/cloud/manage/openapi.md (+1 −1)

@@ -29,7 +29,7 @@ This document covers the ClickHouse Cloud API. For database API endpoints, pleas
 3. To create an API key, specify the key name, permissions for the key, and expiration time, then click `Generate API Key`.
 <br/>
 :::note
-Permissions align with ClickHouse Cloud [predefined roles](/cloud/security/cloud-access-management#predefined-roles). The developer role has read-only permissions and the admin role has full read and write permissions.
+Permissions align with ClickHouse Cloud [predefined roles](/cloud/security/cloud-access-management/overview#predefined-roles). The developer role has read-only permissions and the admin role has full read and write permissions.
-Horizontal scaling is now Generally Available. Users can add additional replicas to scale out their service through the APIs and the cloud console. Please refer to the [documentation](/manage/scaling#self-serve-horizontal-scaling) for information.
+Horizontal scaling is now Generally Available. Users can add additional replicas to scale out their service through the APIs and the cloud console. Please refer to the [documentation](/manage/scaling#manual-horizontal-scaling) for information.

 ### Configurable backups {#configurable-backups}

@@ -446,7 +446,7 @@ The Fast release channel allows your services to receive updates ahead of the re

 ### Terraform support for horizontal scaling {#terraform-support-for-horizontal-scaling}

-ClickHouse Cloud supports [horizontal scaling](/manage/scaling#vertical-and-horizontal-scaling), or the ability to add additional replicas of the same size to your services. Horizontal scaling improves performance and parallelization to support concurrent queries. Previously, adding more replicas required either using the ClickHouse Cloud console or the API. You can now use Terraform to add or remove replicas from your service, allowing you to programmatically scale your ClickHouse services as needed.
+ClickHouse Cloud supports [horizontal scaling](/manage/scaling#how-scaling-works-in-clickhouse-cloud), or the ability to add additional replicas of the same size to your services. Horizontal scaling improves performance and parallelization to support concurrent queries. Previously, adding more replicas required either using the ClickHouse Cloud console or the API. You can now use Terraform to add or remove replicas from your service, allowing you to programmatically scale your ClickHouse services as needed.

 Please see the [ClickHouse Terraform provider](https://registry.terraform.io/providers/ClickHouse/clickhouse/latest/docs) for more information.
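A minimal sketch of the Terraform usage described above, assuming a `clickhouse_service` resource with a `num_replicas` attribute as in recent versions of the provider — the attribute name and required fields should be verified against the linked provider docs:

```hcl
# Sketch only: field names are assumptions, check the provider schema.
resource "clickhouse_service" "example" {
  name           = "my-service"
  cloud_provider = "aws"
  region         = "us-east-1"
  tier           = "production"

  # Horizontal scaling: add or remove same-size replicas.
  num_replicas = 3
}
```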
docs/cloud/security/shared-responsibility-model.md (+4 −5)

@@ -8,7 +8,6 @@ title: Security Shared Responsibility Model

 ClickHouse Cloud offers three service types: Basic, Scale and Enterprise. For more information, review our [Service Types](/cloud/manage/cloud-tiers) page.

-
 ## Cloud architecture {#cloud-architecture}

 The Cloud architecture consists of the control plane and the data plane. The control plane is responsible for organization creation, user management within the control plane, service management, API key management, and billing. The data plane runs tooling for orchestration and management, and houses customer services. For more information, review our [ClickHouse Cloud Architecture](/cloud/reference/architecture) diagram.

@@ -58,9 +57,9 @@ The model below generally addresses ClickHouse responsibilities and shows respon
 |[Standard role-based access](/cloud/security/cloud-access-management) in control plane | Available | AWS, GCP, Azure | All |
-|[Multi-factor authentication (MFA)](/cloud/security/cloud-authentication#multi-factor-authhentication) available | Available | AWS, GCP, Azure | All |
+|[Multi-factor authentication (MFA)](/cloud/security/cloud-authentication#multi-factor-authentication) available | Available | AWS, GCP, Azure | All |
 |[SAML Single Sign-On](/cloud/security/saml-setup) to control plane available | Preview | AWS, GCP, Azure | Enterprise |
-| Granular [role-based access control](/cloud/security/cloud-access-management#database-roles) in databases | Available | AWS, GCP, Azure | All |
+| Granular [role-based access control](/cloud/security/cloud-access-management/overview#database-roles) in databases | Available | AWS, GCP, Azure | All |

 </details>
 <details>
@@ -69,8 +68,8 @@ The model below generally addresses ClickHouse responsibilities and shows respon
 |[Cloud provider and region](/cloud/reference/supported-regions) selections | Available | AWS, GCP, Azure | All |
-| Limited [free daily backups](/cloud/manage/backups#default-backup-policy)| Available | AWS, GCP, Azure | All |
-|[Custom backup configurations](/cloud/manage/backups#configurable-backups) available | Available | GCP, AWS, Azure | Scale or Enterprise |
+| Limited [free daily backups](/cloud/manage/backups/overview#default-backup-policy)| Available | AWS, GCP, Azure | All |
+|[Custom backup configurations](/cloud/manage/backups/overview#configurable-backups) available | Available | GCP, AWS, Azure | Scale or Enterprise |
 |[Customer managed encryption keys (CMEK)](/cloud/security/cmek) for transparent<br/> data encryption available | Available | AWS | Scale or Enterprise |
 |[Field level encryption](/sql-reference/functions/encryption-functions) with manual key management for granular encryption | Available | GCP, AWS, Azure | All |
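The field-level encryption row refers to the SQL encryption functions; a minimal round-trip sketch (the hard-coded 32-byte key is illustrative only — in practice keys are managed outside the database):

```sql
-- AES-256 requires a 32-byte key; ECB mode needs no IV, so it keeps
-- the sketch short (prefer an IV-based mode in real use).
WITH '12345678901234567890123456789012' AS key
SELECT decrypt('aes-256-ecb',
               encrypt('aes-256-ecb', 'top secret', key),
               key) AS round_trip;
```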
-Learn more about the [column compression codecs](/sql-reference/statements/create/table.md/#column-compression-codecs) available and specify them when creating your tables, or afterward.
+Learn more about the [column compression codecs](/sql-reference/statements/create/table#column_compression_codec) available and specify them when creating your tables, or afterward.
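For reference, column compression codecs are specified per column at table creation (or later via `ALTER TABLE ... MODIFY COLUMN`); a small sketch with illustrative names:

```sql
CREATE TABLE readings
(
    ts    DateTime CODEC(DoubleDelta, LZ4),  -- suits monotonic timestamps
    value Float64  CODEC(Gorilla)            -- suits slowly changing floats
)
ENGINE = MergeTree
ORDER BY ts;
```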
docs/faq/operations/delete-old-data.md (+2 −2)

@@ -39,13 +39,13 @@ ALTER DELETE removes rows using asynchronous batch operations. Unlike DELETE FRO

 This is the most common approach to make your system based on ClickHouse [GDPR](https://gdpr-info.eu)-compliant.

-More details on [mutations](../../sql-reference/statements/alter/index.md#alter-mutations).
+More details on [mutations](/sql-reference/statements/alter#mutations).

 ## DROP PARTITION {#drop-partition}

 `ALTER TABLE ... DROP PARTITION` provides a cost-efficient way to drop a whole partition. It’s not that flexible and needs proper partitioning scheme configured on table creation, but still covers most common cases. Like mutations need to be executed from an external system for regular use.

-More details on [manipulating partitions](../../sql-reference/statements/alter/partition.md#alter_drop-partition).
+More details on [manipulating partitions](/sql-reference/statements/alter/partition).
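The two approaches compared in this hunk look like this in practice (table and partition names are illustrative):

```sql
-- Mutation: asynchronous, rewrites the affected data parts.
ALTER TABLE visits DELETE WHERE user_id = 42;

-- Partition drop: cheap, but requires a matching partitioning scheme,
-- e.g. PARTITION BY toYYYYMM(event_date) on table creation.
ALTER TABLE visits DROP PARTITION 202301;
```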
docs/faq/use-cases/time-series.md (+1 −1)

@@ -13,7 +13,7 @@ ClickHouse is a generic data storage solution for [OLAP](../../faq/general/olap.

 First of all, there are **[specialized codecs](../../sql-reference/statements/create/table.md#specialized-codecs)** which make typical time-series. Either common algorithms like `DoubleDelta` and `Gorilla` or specific to ClickHouse like `T64`.

-Second, time-series queries often hit only recent data, like one day or one week old. It makes sense to use servers that have both fast NVMe/SSD drives and high-capacity HDD drives. ClickHouse [TTL](/engines/table-engines/mergetree-family/mergetree.md/##table_engine-mergetree-multiple-volumes) feature allows to configure keeping fresh hot data on fast drives and gradually move it to slower drives as it ages. Rollup or removal of even older data is also possible if your requirements demand it.
+Second, time-series queries often hit only recent data, like one day or one week old. It makes sense to use servers that have both fast NVMe/SSD drives and high-capacity HDD drives. ClickHouse [TTL](/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-ttl) feature allows to configure keeping fresh hot data on fast drives and gradually move it to slower drives as it ages. Rollup or removal of even older data is also possible if your requirements demand it.

 Even though it’s against ClickHouse philosophy of storing and processing raw data, you can use [materialized views](../../sql-reference/statements/create/view.md) to fit into even tighter latency or costs requirements.
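A sketch of the tiered-storage TTL the changed line describes, assuming a storage policy named `hot_cold` with a `cold` volume is configured on the server (names are illustrative):

```sql
CREATE TABLE metrics
(
    ts    DateTime,
    name  LowCardinality(String),
    value Float64
)
ENGINE = MergeTree
ORDER BY (name, ts)
TTL ts + INTERVAL 7 DAY TO VOLUME 'cold',   -- age out to slower disks
    ts + INTERVAL 90 DAY DELETE             -- then remove entirely
SETTINGS storage_policy = 'hot_cold';
```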