Commit abce100

Merge pull request #3570 from ClickHouse/update-pg-pipe-public

Postgres pipe: Update status in a few texts to beta

2 parents 1770c0d + 869126b

File tree

2 files changed: +5 −5 lines


docs/integrations/data-ingestion/dbms/postgresql/connecting-to-postgresql.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -12,15 +12,15 @@ import ExperimentalBadge from '@theme/badges/ExperimentalBadge';
 
 This page covers the following options for integrating PostgreSQL with ClickHouse:
 
-- using [ClickPipes](/integrations/clickpipes/postgres), the managed integration service for ClickHouse Cloud - now in Private Preview. Please [sign up here](https://clickpipes.peerdb.io/)
+- using [ClickPipes](/integrations/clickpipes/postgres), the managed integration service for ClickHouse Cloud - now in public beta. Please [sign up here](https://clickpipes.peerdb.io/)
 - using `PeerDB by ClickHouse`, a CDC tool specifically designed for PostgreSQL database replication to both self-hosted ClickHouse and ClickHouse Cloud
-  PeerDB is now available natively in ClickHouse Cloud - Blazing-fast Postgres to ClickHouse CDC with our [new ClickPipe connector](/integrations/clickpipes/postgres) - now in Private Preview. Please [sign up here](https://clickpipes.peerdb.io/)
+  PeerDB is now available natively in ClickHouse Cloud - Blazing-fast Postgres to ClickHouse CDC with our [new ClickPipe connector](/integrations/clickpipes/postgres) - now in public beta. Please [sign up here](https://clickpipes.peerdb.io/)
 - using the `PostgreSQL` table engine, for reading from a PostgreSQL table
 - using the experimental `MaterializedPostgreSQL` database engine, for syncing a database in PostgreSQL with a database in ClickHouse
 
 ## Using ClickPipes (powered by PeerDB) {#using-clickpipes-powered-by-peerdb}
 
-PeerDB is now available natively in ClickHouse Cloud - Blazing-fast Postgres to ClickHouse CDC with our [new ClickPipe connector](/integrations/clickpipes/postgres) - now in Private Preview. Please [sign up here](https://clickpipes.peerdb.io/)
+PeerDB is now available natively in ClickHouse Cloud - Blazing-fast Postgres to ClickHouse CDC with our [new ClickPipe connector](/integrations/clickpipes/postgres) - now in public beta. Please [sign up here](https://clickpipes.peerdb.io/)
 
 ## Using the PostgreSQL Table Engine {#using-the-postgresql-table-engine}
```
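The `PostgreSQL` table engine mentioned in the list above exposes a remote Postgres table inside ClickHouse. A minimal sketch is shown below; the host, credentials, table, and column names are illustrative placeholders, not values from this commit.

```sql
-- Sketch only: replace connection details with your own.
CREATE TABLE pg_users
(
    id UInt64,
    name String
)
ENGINE = PostgreSQL('postgres-host:5432', 'mydb', 'users', 'pg_user', 'pg_password');

-- Each query reads through to PostgreSQL at execution time.
SELECT count() FROM pg_users;
```

Because reads go to PostgreSQL on every query, this option suits ad-hoc reads rather than continuous replication, which is what ClickPipes/PeerDB target.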

docs/integrations/data-ingestion/dbms/postgresql/inserting-data.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -9,8 +9,8 @@ We recommend reading [this guide](/guides/inserting-data) to learn best practice
 
 For bulk loading data from PostgreSQL, users can use:
 
-- using [ClickPipes](/integrations/clickpipes/postgres), the managed integration service for ClickHouse Cloud - now in Private Preview. Please [sign up here](https://clickpipes.peerdb.io/)
+- using [ClickPipes](/integrations/clickpipes/postgres), the managed integration service for ClickHouse Cloud - now in public beta. Please [sign up here](https://clickpipes.peerdb.io/)
 - `PeerDB by ClickHouse`, an ETL tool specifically designed for PostgreSQL database replication to both self-hosted ClickHouse and ClickHouse Cloud.
-  PeerDB is now available natively in ClickHouse Cloud - Blazing-fast Postgres to ClickHouse CDC with our [new ClickPipe connector](/integrations/clickpipes/postgres) - now in Private Preview. Please [sign up here](https://clickpipes.peerdb.io/)
+  PeerDB is now available natively in ClickHouse Cloud - Blazing-fast Postgres to ClickHouse CDC with our [new ClickPipe connector](/integrations/clickpipes/postgres) - now in public beta. Please [sign up here](https://clickhouse.com/cloud/clickpipes/postgres-cdc-connector)
 - The [Postgres Table Function](/sql-reference/table-functions/postgresql) to read data directly. This is typically appropriate if batch replication based on a known watermark, e.g. a timestamp, is sufficient, or if it is a one-off migration. This approach can scale to tens of millions of rows. Users looking to migrate larger datasets should consider multiple requests, each dealing with a chunk of the data. Staging tables can be used for each chunk prior to its partitions being moved to a final table. This allows failed requests to be retried. For further details on this bulk-loading strategy, see here.
 - Data can be exported from Postgres in CSV format. This can then be inserted into ClickHouse from either local files or via object storage using table functions.
```
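The chunked bulk-load strategy described in the list above can be sketched with the `postgresql()` table function. Connection details, the `created_at` watermark column, and the table and partition names are hypothetical placeholders for illustration.

```sql
-- Sketch only: load one watermark-bounded chunk into a staging table.
INSERT INTO staging_events
SELECT *
FROM postgresql('postgres-host:5432', 'mydb', 'events', 'pg_user', 'pg_password')
WHERE created_at >= '2024-01-01' AND created_at < '2024-02-01';

-- After verifying the chunk, move its partition into the final table,
-- then repeat for the next watermark range.
ALTER TABLE staging_events MOVE PARTITION '202401' TO TABLE events;
```

A failed chunk only requires truncating the staging table and re-running that one `INSERT`, which is what makes the staging-table approach retry-friendly for larger datasets.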
