docs/guides/inserting-data.md (2 additions, 2 deletions)
@@ -137,7 +137,7 @@ Unlike many traditional databases, ClickHouse supports an HTTP interface.
 Users can use this for both inserting and querying data, using any of the above formats.
 This is often preferable to ClickHouse's native protocol as it allows traffic to be easily switched with load balancers.
 We expect small differences in insert performance with the native protocol, which incurs a little less overhead.
-Existing clients use either of these protocols (in some cases both e.g. the Go client).
+Existing clients use either of these protocols (in some cases both e.g. the Go client).
 The native protocol does allow query progress to be easily tracked.
 
 See [HTTP Interface](/interfaces/http) for further details.
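For context on the HTTP route this hunk touches, a minimal sketch of inserting and querying over the HTTP interface (not part of the diff; the `events` table, its columns, and `localhost:8123`, ClickHouse's default HTTP port, are placeholder assumptions):

```bash
# Insert over the HTTP interface; any ordinary HTTP load balancer can sit in front.
# Placeholder table: events(id UInt64, message String).
echo "INSERT INTO events (id, message) VALUES (1, 'first'), (2, 'second')" | \
  curl 'http://localhost:8123/' --data-binary @-

# Queries travel over the same interface.
curl 'http://localhost:8123/' --data-binary 'SELECT count() FROM events'
```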
@@ -149,7 +149,7 @@ For loading data from Postgres, users can use:
 - `PeerDB by ClickHouse`, an ETL tool specifically designed for PostgreSQL database replication. This is available in both:
   - ClickHouse Cloud - available through our [new connector](/integrations/clickpipes/postgres) in ClickPipes, our managed ingestion service.
   - Self-managed - via the [open-source project](https://github.com/PeerDB-io/peerdb).
-- The [PostgreSQL table engine](/integrations/postgresql#using-the-postgresql-table-engine) to read data directly as shown in previous examples. Typically appropriate if batch replication based on a known watermark, e.g., timestamp, is sufficient or if it's a one-off migration. This approach can scale to 10's millions of rows. Users looking to migrate larger datasets should consider multiple requests, each dealing with a chunk of the data. Staging tables can be used for each chunk prior to its partitions being moved to a final table. This allows failed requests to be retried. For further details on this bulk-loading strategy, see here.
+- The [PostgreSQL table engine](/integrations/postgresql#using-the-postgresql-table-engine) to read data directly as shown in previous examples. Typically appropriate if batch replication based on a known watermark, e.g., timestamp, is sufficient or if it's a one-off migration. This approach can scale to 10's of millions of rows. Users looking to migrate larger datasets should consider multiple requests, each dealing with a chunk of the data. Staging tables can be used for each chunk prior to its partitions being moved to a final table. This allows failed requests to be retried. For further details on this bulk-loading strategy, see here.
 - Data can be exported from PostgreSQL in CSV format. This can then be inserted into ClickHouse from either local files or via object storage using table functions.
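To illustrate the CSV route in the final bullet of this hunk, a minimal sketch (not part of the diff; the `events` table and `$POSTGRES_URL` connection string are placeholder assumptions):

```bash
# Export from PostgreSQL as CSV with a header row (client-side \copy).
psql "$POSTGRES_URL" -c "\copy events TO 'events.csv' CSV HEADER"

# Load the local file into ClickHouse; CSVWithNames matches columns by header name.
clickhouse-client --query "INSERT INTO events FORMAT CSVWithNames" < events.csv
```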