docs/best-practices/_snippets/_avoid_optimize_final.md
Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ Normally, ClickHouse avoids merging parts larger than ~150 GB (configurable via
* It may try to merge **multiple 150 GB parts** into one massive part
* This could result in **long merge times**, **memory pressure**, or even **out-of-memory errors**
- * These large parts may become challenging to merge i.e. attempts to merge them further fails for the reasons stated above. In cases where merges are required for correct query time behavior, this can result in undesired consequences e.g. [duplicates accumulating for a ReplacingMergeTree](/guides/developer/deduplication#using-replacingmergetree-for-upserts), increasing query time performance.
+ * These large parts may become challenging to merge, i.e. attempts to merge them further fail for the reasons stated above. In cases where merges are required for correct query-time behavior, this can result in undesired consequences such as [duplicates accumulating for a ReplacingMergeTree](/guides/developer/deduplication#using-replacingmergetree-for-upserts), diminishing query-time performance.
## Let background merges do the work {#let-background-merges-do-the-work}
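Since both the removed and added lines above concern query-time deduplication for `ReplacingMergeTree`, the following is a minimal, hypothetical sketch of that behavior. The table name and columns are invented for illustration and do not appear in the changed file.

```sql
-- Hypothetical table for illustration: rows sharing the same ORDER BY key
-- are collapsed only when their parts are merged (or when FINAL is used).
CREATE TABLE user_profiles
(
    user_id    UInt64,
    updated_at DateTime,
    email      String
)
ENGINE = ReplacingMergeTree(updated_at)
ORDER BY user_id;

-- If large parts can no longer be merged, older row versions linger and
-- a plain SELECT may return duplicates for the same user_id:
SELECT * FROM user_profiles WHERE user_id = 42;

-- Query-time deduplication with FINAL avoids forcing a potentially huge
-- merge via OPTIMIZE TABLE user_profiles FINAL, at some extra read cost:
SELECT * FROM user_profiles FINAL WHERE user_id = 42;
```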