As you learned in the [guide](/architecture/horizontal-scaling) for setting up a cluster with 2 shards and 1 replica, distributed tables are tables which have access to shards located on different
hosts and are defined using the `Distributed` table engine.
The distributed table acts as the interface across all the shards in the cluster.

From any of the host clients, run the following query to create a distributed table
using the existing replicated table we created in the previous step:

```sql
CREATE TABLE IF NOT EXISTS uk.uk_price_paid_distributed
```
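The full statement also specifies the table structure, which can be copied from the local table with `AS`, and the `Distributed` engine that routes queries to the replicated table on each shard; once created, the distributed table is queried like any other table, with each query fanning out to one replica of every shard. The following is a minimal sketch only; the cluster name `cluster_2S_2R` and the `rand()` sharding key are assumptions and should match your own cluster configuration:

```sql
-- Sketch only: create the distributed table with the same structure as the
-- replicated local table. 'cluster_2S_2R' and rand() are assumed values; use
-- the cluster name defined in your remote_servers configuration.
CREATE TABLE IF NOT EXISTS uk.uk_price_paid_distributed ON CLUSTER cluster_2S_2R
AS uk.uk_price_paid_local
ENGINE = Distributed('cluster_2S_2R', 'uk', 'uk_price_paid_local', rand());
```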
</VerticalStepper>

## Conclusion {#conclusion}

The advantage of this cluster topology with 2 shards and 2 replicas is that it provides both scalability and fault tolerance.
Data is distributed across separate hosts, reducing storage and I/O requirements per node, while queries are processed in parallel across both shards for improved performance and memory efficiency.
Critically, the cluster can tolerate the loss of one node and continue serving queries without interruption, as each shard has a backup replica available on another node.
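A quick way to confirm this layout is to query the `system.clusters` table, which lists every host together with its shard and replica number; with 2 shards and 2 replicas it returns four rows. The cluster name `cluster_2S_2R` below is an assumption, so substitute the name used in your own configuration:

```sql
-- List each host in the cluster with its shard and replica number.
-- 'cluster_2S_2R' is an assumed name; use the cluster defined in your
-- remote_servers configuration.
SELECT cluster, shard_num, replica_num, host_name
FROM system.clusters
WHERE cluster = 'cluster_2S_2R';
```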
The main disadvantage of this cluster topology is the increased storage overhead—it requires twice the storage capacity compared to a setup without replicas, as each shard is duplicated.
Additionally, while the cluster can survive a single node failure, losing two nodes simultaneously may render the cluster inoperable, depending on which nodes fail and how shards are distributed.
This topology strikes a balance between availability and cost, making it suitable for production environments where some level of fault tolerance is required without the expense of higher replication factors.

To learn how ClickHouse Cloud processes queries while offering both scalability and fault tolerance, see the section ["Parallel Replicas"](/deployment-guides/parallel-replicas).

`docs/deployment-guides/replication-sharding-examples/_snippets/_working_example.mdx`:

The following steps will walk you through setting up the cluster from
scratch. If you prefer to skip these steps and jump straight to running the
cluster, you can obtain the example
files from the ['docker-compose-recipes' directory](https://github.com/ClickHouse/examples/tree/main/docker-compose-recipes/recipes) of the examples repository.