
Commit 7b879cb

fix yarn.lock attempt 2
2 parents 0f0b0f5 + 609a6ad commit 7b879cb

File tree

39 files changed: +882, -8658 lines changed


.github/workflows/build-search.yml

Lines changed: 1 addition & 2 deletions
@@ -15,8 +15,7 @@ env:
 
 jobs:
   update-search:
-    if: github.event.pull_request.merged == true && contains(github.event.pull_request.labels.*.name, 'update search') && github.event.pull_request.base.ref == 'main'
-    #if: contains(github.event.pull_request.labels.*.name, 'update search') # Updated to trigger directly on PRs with the label
+    if: github.event_name == 'workflow_dispatch' || github.event_name == 'schedule' || (github.event_name == 'pull_request' && github.event.pull_request.merged == true && contains(github.event.pull_request.labels.*.name, 'update search') && github.event.pull_request.base.ref == 'main')
     runs-on: ubuntu-latest
 
     steps:

docs/en/cloud/manage/backups/overview.md

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@ Your service will be backed up based on the set schedule, whether it is the defa
 
 ## Understanding backup cost
 
-ClickHouse Cloud includes two backups for free, but choosing a schedule that requires retaining more data, or causes more frequent backups can cause additional storage charges for backups. If you do not change the default settings, you will not incur any backup cost.
+Per the default policy, ClickHouse Cloud mandates a backup every day, with 24-hour retention. Choosing a schedule that requires retaining more data or causes more frequent backups can incur additional storage charges for backups.
 
 To understand the backup cost, you can view the backup cost per service from the usage screen (as shown below). Once you have backups running for a few days with a customized schedule, you can get an idea of the cost and extrapolate to get the monthly cost for backups.
 
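To illustrate the extrapolation described in the changed paragraph above, here is a small, purely hypothetical Python sketch; the daily figures are invented and are not ClickHouse Cloud pricing. Read the real per-service values from the usage screen.

```python
# Hypothetical sketch: extrapolate a monthly backup cost from a few observed days.
# The daily figures below are invented for illustration only.

observed_daily_costs = [0.42, 0.45, 0.44, 0.47]  # USD per day, read from the usage screen

average_daily_cost = sum(observed_daily_costs) / len(observed_daily_costs)
estimated_monthly_cost = average_daily_cost * 30.44  # average number of days per month

print(f"Average daily backup cost: ${average_daily_cost:.2f}")
print(f"Estimated monthly backup cost: ${estimated_monthly_cost:.2f}")
```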

docs/en/cloud/reference/byoc.md

Lines changed: 68 additions & 27 deletions
@@ -1,19 +1,16 @@
 ---
-title: BYOC (Bring Your Own Cloud) for AWS - Beta
+title: BYOC (Bring Your Own Cloud) for AWS
 slug: /en/cloud/reference/byoc
 sidebar_label: BYOC (Bring Your Own Cloud)
 keywords: [byoc, cloud, bring your own cloud]
 description: Deploy ClickHouse on your own cloud infrastructure
 ---
-import BetaBadge from '@theme/badges/BetaBadge';
 
 ## Overview
 
-<BetaBadge />
-
 BYOC (Bring Your Own Cloud) allows you to deploy ClickHouse Cloud on your own cloud infrastructure. This is useful if you have specific requirements or constraints that prevent you from using the ClickHouse Cloud managed service.
 
-**BYOC is currently in Beta. If you would like access, please contact [support](https://clickhouse.com/support/program).** Refer to our [Terms of Service](https://clickhouse.com/legal/agreements/terms-of-service) for additional information.
+**If you would like access, please contact [support](https://clickhouse.com/support/program).** Refer to our [Terms of Service](https://clickhouse.com/legal/agreements/terms-of-service) for additional information.
 
 BYOC is currently only supported for AWS, with GCP and Microsoft Azure in development.
 

@@ -43,7 +40,7 @@ Metrics and logs are stored within the customer's BYOC VPC. Logs are currently s
 
 ## Onboarding Process
 
-During the Beta, initiate the onboarding process by reaching out to ClickHouse [support](https://clickhouse.com/support/program). Customers need to have a dedicated AWS account and know the region they will use. At this time, we are allowing users to launch BYOC services only in the regions that we support for ClickHouse Cloud.
+Customers can initiate the onboarding process by reaching out to ClickHouse [support](https://clickhouse.com/support/program). Customers need to have a dedicated AWS account and know the region they will use. At this time, we are allowing users to launch BYOC services only in the regions that we support for ClickHouse Cloud.
 
 ### Prepare a Dedicated AWS Account
 

@@ -65,44 +62,88 @@ After creating the CloudFormation stack, you will be prompted to set up the infr
 
 ### Optional: Setup VPC Peering
 
-To create or delete VPC peering for ClickHouse BYOC, submit a ticket with the following details:
+To create or delete VPC peering for ClickHouse BYOC, follow these steps:
 
-- ClickHouse BYOC name for the VPC peering request.
-- VPC ID (`vpc-xxxxxx`) to peer with the BYOC VPC.
-- CIDR range of the VPC.
-- AWS account owning the peering VPC.
-- AWS region of the VPC.
+#### Step 1: Create a peering connection
+1. Navigate to the VPC Dashboard in the ClickHouse BYOC account.
+2. Select Peering Connections.
+3. Click Create Peering Connection.
+4. Set the VPC Requester to the ClickHouse VPC ID.
+5. Set the VPC Acceptor to the target VPC ID. (Select another account if applicable.)
+6. Click Create Peering Connection.
 
-Once the support ticket is received and processed, you will need to complete a few steps in your AWS account to finalize the peering setup:
+<br />
 
-1. Accept the VPC peering request in the AWS account of the peered VPC.
-   - Navigate to **VPC -> Peering connections -> Actions -> Accept request**.
+<img src={require('./images/byoc-vpcpeering-1.png').default}
+    alt='BYOC Create Peering Connection'
+    class='image'
+    style={{width: '800px'}}
+/>
+
+<br />
 
-2. Adjust the route table for the peered VPC:
-   - Locate the subnet in the peered VPC that needs to connect to the ClickHouse instance.
-   - Edit the subnet's route table and add a route with the following configuration:
-     - **Destination**: ClickHouse BYOC VPC CIDR (e.g., `10.0.0.0/16`)
-     - **Target**: Peering Connection (`pcx-12345678`, the actual ID will appear in the dropdown list)
+#### Step 2: Accept the peering connection request
+Go to the peering account and approve the VPC peering request on the (VPC -> Peering connections -> Actions -> Accept request) page.
 
 <br />
 
-<img src={require('./images/byoc-2.png').default}
-    alt='BYOC network configuration'
+<img src={require('./images/byoc-vpcpeering-2.png').default}
+    alt='BYOC Accept Peering Connection'
     class='image'
-    style={{width: '600px'}}
+    style={{width: '800px'}}
 />
 
 <br />
 
-3. Check existing security groups and ensure no rules block access to the BYOC VPC.
+#### Step 3: Add destinations to the ClickHouse VPC route tables
+In the ClickHouse BYOC account:
+1. Select Route Tables in the VPC Dashboard.
+2. Search for the ClickHouse VPC ID. Edit each route table attached to the private subnets.
+3. Click the Edit button under the Routes tab.
+4. Click Add another route.
+5. Enter the CIDR range of the target VPC for the Destination.
+6. Select “Peering Connection” and the ID of the peering connection for the Target.
+
+<br />
+
+<img src={require('./images/byoc-vpcpeering-3.png').default}
+    alt='BYOC Add route table'
+    class='image'
+    style={{width: '800px'}}
+/>
 
+<br />
+
+#### Step 4: Add destinations to the target VPC route tables
+In the peering AWS account:
+1. Select Route Tables in the VPC Dashboard.
+2. Search for the target VPC ID.
+3. Click the Edit button under the Routes tab.
+4. Click Add another route.
+5. Enter the CIDR range of the ClickHouse VPC for the Destination.
+6. Select “Peering Connection” and the ID of the peering connection for the Target.
+
+<br />
+
+<img src={require('./images/byoc-vpcpeering-4.png').default}
+    alt='BYOC Add route table'
+    class='image'
+    style={{width: '800px'}}
+/>
+
+<br />
+
+#### Step 5: Enable the Private Load Balancer for ClickHouse BYOC
+Contact ClickHouse support to enable the Private Load Balancer.
+
+---
 The ClickHouse service should now be accessible from the peered VPC.
 
 To access ClickHouse privately, a private load balancer and endpoint are provisioned for secure connectivity from the user's peered VPC. The private endpoint follows the public endpoint format with a `-private` suffix. For example:
 - **Public endpoint**: `h5ju65kv87.mhp0y4dmph.us-west-2.aws.byoc.clickhouse.cloud`
 - **Private endpoint**: `h5ju65kv87-private.mhp0y4dmph.us-west-2.aws.byoc.clickhouse.cloud`
 
-4. (Optional) After verifying that peering is working, you can request the removal of the public load balancer for ClickHouse BYOC.
+Optionally, after verifying that peering is working, you can request the removal of the public load balancer for ClickHouse BYOC.
 
 ## Upgrade Process
 
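The console walkthrough added in the hunk above can also be scripted. Below is a minimal, hypothetical boto3 sketch of the same flow: create the peering connection from the ClickHouse BYOC account, accept it from the target account, then add a route on each side. All profile names, VPC and route-table IDs, and CIDR ranges are placeholders; the console steps in the diff remain the documented procedure, and enabling the private load balancer (Step 5) still goes through ClickHouse support.

```python
# Hypothetical boto3 sketch of the VPC peering flow described above.
# All IDs, CIDR ranges, and profile names are placeholders.
import boto3

byoc = boto3.Session(profile_name="clickhouse-byoc").client("ec2", region_name="us-west-2")
target = boto3.Session(profile_name="my-account").client("ec2", region_name="us-west-2")

# Step 1: create the peering connection from the ClickHouse BYOC account.
peering = byoc.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",          # ClickHouse BYOC VPC (requester)
    PeerVpcId="vpc-bbbb2222",      # target VPC (accepter)
    PeerOwnerId="123456789012",    # target AWS account, if different
    PeerRegion="us-west-2",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# Step 2: accept the request from the peering (target) account.
target.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Steps 3 and 4: add a route on each side, one per relevant route table.
byoc.create_route(
    RouteTableId="rtb-cccc3333",         # route table of a ClickHouse private subnet
    DestinationCidrBlock="10.1.0.0/16",  # target VPC CIDR
    VpcPeeringConnectionId=pcx_id,
)
target.create_route(
    RouteTableId="rtb-dddd4444",         # route table in the target VPC
    DestinationCidrBlock="10.0.0.0/16",  # ClickHouse BYOC VPC CIDR
    VpcPeeringConnectionId=pcx_id,
)
```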

@@ -202,7 +243,8 @@ State Exporter sends ClickHouse service state information to an SQS owned by Cli
 - Supports operations such as start, stop, and terminate.
 - View services and status.
 - **Backup and restore.**
-- **Manual vertical and horizontal scaling.**
+- **Manual vertical and horizontal scaling.**
+- **Idling.**
 - **Runtime security monitoring and alerting via Falco (`falco-metrics`).**
 - **Zero Trust Network via Tailscale.**
 - **Monitoring**:
@@ -218,7 +260,6 @@ State Exporter sends ClickHouse service state information to an SQS owned by Cli
 - [AWS KMS](https://aws.amazon.com/kms/) aka CMEK (customer-managed encryption keys)
 - ClickPipes for ingest
 - Autoscaling
-- Idling
 - MySQL interface
 
 ## FAQ

Binary image files changed (not rendered here): -59.8 KB, 183 KB, 50.3 KB, 23.2 KB, 18.1 KB.

docs/en/concepts/why-clickhouse-is-so-fast.md

Lines changed: 5 additions & 5 deletions
@@ -13,19 +13,19 @@ From an architectural perspective, databases consist (at least) of a storage lay
 
 ## Storage Layer: Concurrent inserts are isolated from each other
 
-In ClickHouse, each table consists of multiple "table parts". A part is created whenever a user inserts data into the table (INSERT statement). A query is always executed against all table parts that exist at the time the query starts.
+In ClickHouse, each table consists of multiple "table parts". A [part](/docs/en/parts) is created whenever a user inserts data into the table (INSERT statement). A query is always executed against all table parts that exist at the time the query starts.
 
-To avoid that too many parts accumulate, ClickHouse runs a merge operation in the background which continuously combines multiple (small) parts into a single bigger part.
+To prevent too many parts from accumulating, ClickHouse runs a [merge](/docs/en/merges) operation in the background which continuously combines multiple smaller parts into a single bigger part.
 
-This approach has several advantages: On the one hand, individual inserts are "local" in the sense that they do not need to update global, i.e. per-table data structures. As a result, multiple simultaneous inserts need no mutual synchronization or synchronization with existing table data, and thus inserts can be performed almost at the speed of disk I/O.
+This approach has several advantages: All data processing can be [offloaded to background part merges](/docs/en/concepts/why-clickhouse-is-so-fast#storage-layer-merge-time-computation), keeping data writes lightweight and highly efficient. Individual inserts are "local" in the sense that they do not need to update global, i.e. per-table data structures. As a result, multiple simultaneous inserts need no mutual synchronization or synchronization with existing table data, and thus inserts can be performed almost at the speed of disk I/O.
 
 ## Storage Layer: Concurrent inserts and selects are isolated
 
-On the other hand, merging parts is a background operation which is invisible to the user, i.e. does not affect concurrent SELECT queries. In fact, this architecture isolates insert and selects so effectively, that many other databases adopted it.
+Inserts are fully isolated from SELECT queries, and merging inserted data parts happens in the background without affecting concurrent queries.
 
 ## Storage Layer: Merge-time computation
 
-Unlike other databases, ClickHouse is also able to perform additional data transformations during the merge operation. Examples of this include:
+Unlike other databases, ClickHouse keeps data writes lightweight and efficient by performing all additional data transformations during the [merge](/docs/en/merges) background process. Examples of this include:
 
 - **Replacing merges** which retain only the most recent version of a row in the input parts and discard all other row versions. Replacing merges can be thought of as a merge-time cleanup operation.
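
To make the merge-time computation idea concrete, here is a purely illustrative Python sketch (not ClickHouse code) of what a replacing merge does conceptually: several parts are combined, and only the most recent version of each row key survives.

```python
# Purely illustrative sketch of a "replacing merge": combine several parts and
# keep only the most recent version of each key. Not ClickHouse code.

def replacing_merge(*parts):
    """Merge parts (lists of (key, version, value) rows) into one part,
    keeping only the highest-version row per key."""
    latest = {}
    for part in parts:
        for key, version, value in part:
            if key not in latest or version > latest[key][0]:
                latest[key] = (version, value)
    # A real merge would also write the result sorted by the table's sorting key.
    return sorted((key, ver, val) for key, (ver, val) in latest.items())

part_1 = [("user:1", 1, "alice@old.example"), ("user:2", 1, "bob@example.com")]
part_2 = [("user:1", 2, "alice@new.example")]

print(replacing_merge(part_1, part_2))
# [('user:1', 2, 'alice@new.example'), ('user:2', 1, 'bob@example.com')]
```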

docs/en/engines/table-engines/integrations/s3queue.md

Lines changed: 4 additions & 0 deletions
@@ -4,6 +4,8 @@ sidebar_position: 181
 sidebar_label: S3Queue
 ---
 
+import ScalePlanFeatureBadge from '@theme/badges/ScalePlanFeatureBadge'
+
 # S3Queue Table Engine
 
 This engine provides integration with [Amazon S3](https://aws.amazon.com/s3/) ecosystem and allows streaming import. This engine is similar to the [Kafka](../../../engines/table-engines/integrations/kafka.md), [RabbitMQ](../../../engines/table-engines/integrations/rabbitmq.md) engines, but provides S3-specific features.
@@ -194,6 +196,8 @@ Engine supports all s3 related settings. For more information about S3 settings
 
 ## S3 role-based access
 
+<ScalePlanFeatureBadge feature="S3 Role-Based Access" />
+
 The s3Queue table engine supports role-based access.
 Refer to the documentation [here](https://clickhouse.com/docs/en/cloud/security/secure-s3) for steps to configure a role to access your bucket.
 
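As background for the engine this file documents, here is a minimal, hypothetical sketch of creating an S3Queue streaming pipeline from Python with the clickhouse-connect client. The host, credentials, bucket URL, and table names are placeholders, and the full list of supported settings is in the S3Queue documentation.

```python
# Hypothetical sketch: an S3Queue table plus a materialized view that drains it
# into a MergeTree table, created via the clickhouse-connect Python client.
# Host, credentials, bucket URL, and table names are placeholders.
import clickhouse_connect

client = clickhouse_connect.get_client(
    host="my-service.clickhouse.cloud", username="default", password="***", secure=True
)

# Streaming source: new files matching the pattern are consumed once.
client.command("""
    CREATE TABLE s3_queue_events (name String, value UInt32)
    ENGINE = S3Queue('https://my-bucket.s3.amazonaws.com/data/*.csv', 'CSV')
    SETTINGS mode = 'unordered'
""")

# Destination table and the materialized view that moves rows into it.
client.command("""
    CREATE TABLE events (name String, value UInt32)
    ENGINE = MergeTree ORDER BY name
""")
client.command("""
    CREATE MATERIALIZED VIEW events_mv TO events
    AS SELECT name, value FROM s3_queue_events
""")
```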

0 commit comments
