
Commit 4d9d8e2

Update query-performance.md
1 parent 74608d6 commit 4d9d8e2

1 file changed: +10 -10 lines changed


docs/use-cases/time-series/05_query-performance.md

Lines changed: 10 additions & 10 deletions
@@ -10,15 +10,15 @@ doc_type: 'guide'

# Time-series query performance

- After optimizing storage, the next step is improving query performance.
- This section explores two key techniques: optimizing `ORDER BY` keys and using materialized views.
+ After optimizing storage, the next step is improving query performance.
+ This section explores two key techniques: optimizing `ORDER BY` keys and using materialized views.
We'll see how these approaches can reduce query times from seconds to milliseconds.

- ## Optimize ORDER BY keys {#time-series-optimize-order-by}
+ ## Optimize `ORDER BY` keys {#time-series-optimize-order-by}

- Before attempting other optimizations, you should optimize their ordering key to ensure ClickHouse produces the fastest possible results.
- Choosing the key right largely depends on the queries you're going to run. Suppose most of our queries filter by `project` and `subproject` columns.
- In this case, its a good idea to add them to the ordering key - as well as the time column since we query on time as well:
+ Before attempting other optimizations, you should optimize ordering keys to ensure ClickHouse produces the fastest possible results.
+ Choosing the right key largely depends on the queries you're going to run. Suppose most of our queries filter by the `project` and `subproject` columns.
+ In this case, it's a good idea to add them to the ordering key as well as the `time` column since we query on time as well.

Let's create another version of the table that has the same column types as `wikistat`, but is ordered by `(project, subproject, time)`.
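For context, a minimal sketch of what that reordered table could look like. The table name, column types, and engine settings are assumptions based on the `wikistat` dataset used throughout this guide and are not part of this commit; the point is the `(project, subproject, time)` ordering key.

```sql
-- Illustrative sketch: columns are assumed to mirror the wikistat table
-- (time, project, subproject, path, hits); only the ordering key matters here.
CREATE TABLE wikistat_project_subproject
(
    `time` DateTime,
    `project` LowCardinality(String),
    `subproject` LowCardinality(String),
    `path` String,
    `hits` UInt64
)
ENGINE = MergeTree
ORDER BY (project, subproject, time);
```

With this key, queries that filter on `project` and `subproject` (and optionally a time range) can skip most granules instead of scanning the full table.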

@@ -171,7 +171,7 @@ GROUP BY path, month;

This destination table will only be populated when new records are inserted into the `wikistat` table, so we need to do some [backfilling](/docs/data-modeling/backfilling).

- The easiest way to do this is using an [`INSERT INTO SELECT`](/docs/sql-reference/statements/insert-into#inserting-the-results-of-select) statement to insert directly into the materialized view's target table [using](https://github.com/ClickHouse/examples/tree/main/ClickHouse_vs_ElasticSearch/DataAnalytics#variant-1---directly-inserting-into-the-target-table-by-using-the-materialized-views-transformation-query) the view's SELECT query (transformation) :
+ The easiest way to do this is using an [`INSERT INTO SELECT`](/docs/sql-reference/statements/insert-into#inserting-the-results-of-select) statement to insert directly into the materialized view's target table [using](https://github.com/ClickHouse/examples/tree/main/ClickHouse_vs_ElasticSearch/DataAnalytics#variant-1---directly-inserting-into-the-target-table-by-using-the-materialized-views-transformation-query) the view's `SELECT` query (transformation):

```sql
INSERT INTO wikistat_top
@@ -187,10 +187,10 @@ Depending on the cardinality of the raw data set (we have 1 billion rows!), this

* Creating a temporary table with a Null table engine
* Connecting a copy of the normally used materialized view to that temporary table
- * Using an INSERT INTO SELECT query, copying all data from the raw data set into that temporary table
+ * Using an `INSERT INTO SELECT` query, copying all data from the raw data set into that temporary table
* Dropping the temporary table and the temporary materialized view.

- With that approach, rows from the raw data set are copied block-wise into the temporary table (which doesn't store any of these rows), and for each block of rows, a partial state is calculated and written to the target table, where these states are incrementally merged in the background.
+ With this approach, rows from the raw data set are copied block-wise into the temporary table (which doesn't store any of these rows), and for each block of rows, a partial state is calculated and written to the target table, where these states are incrementally merged in the background.

```sql
CREATE TABLE wikistat_backfill
@@ -261,5 +261,5 @@ LIMIT 10;
10 rows in set. Elapsed: 0.004 sec.
```

- Our performance improvement here is dramatic.
+ Our performance improvement here is dramatic.
Before it took just over 2 seconds to compute the answer to this query and now it takes only 4 milliseconds.
