
Commit 1becfaa

Merge pull request #4499 from Blargian/api_endpoints
Update query API endpoints docs
2 parents 3448b0a + 8ac72f3 · commit 1becfaa

File tree

9 files changed: +679 −501 lines

docs/cloud/features/03_sql_console_features/03_query-endpoints.md

Lines changed: 11 additions & 494 deletions
Large diffs are not rendered by default.

docs/cloud/guides/SQL_console/query-endpoints.md

Lines changed: 645 additions & 0 deletions
Large diffs are not rendered by default.

docs/getting-started/example-datasets/youtube-dislikes.md

Lines changed: 23 additions & 7 deletions
@@ -22,7 +22,11 @@ The steps below will easily work on a local install of ClickHouse too. The only
 
 ## Step-by-step instructions {#step-by-step-instructions}
 
-1. Let's see what the data looks like. The `s3cluster` table function returns a table, so we can `DESCRIBE` the result:
+<VerticalStepper headerLevel="h3">
+
+### Data exploration {#data-exploration}
+
+Let's see what the data looks like. The `s3cluster` table function returns a table, so we can `DESCRIBE` the result:
 
 ```sql
 DESCRIBE s3(
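The hunk cuts the query off after `DESCRIBE s3(`. For reference, the call presumably continues along these lines; the bucket path shown is an assumption based on the public dataset location and is not part of this diff:

```sql
-- Assumed form of the truncated DESCRIBE call (path not shown in the diff)
DESCRIBE s3(
    'https://clickhouse-public-datasets.s3.amazonaws.com/youtube/original/files/*.zst',
    'JSONLines'
);
```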
@@ -59,7 +63,10 @@ ClickHouse infers the following schema from the JSON file:
 └─────────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
 ```
 
-2. Based on the inferred schema, we cleaned up the data types and added a primary key. Define the following table:
+### Create the table {#create-the-table}
+
+Based on the inferred schema, we cleaned up the data types and added a primary key.
+Define the following table:
 
 ```sql
 CREATE TABLE youtube
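Only the table name, engine, and sort key survive in the diff context. A condensed sketch of the kind of table this step defines; the column list is abridged and illustrative, not the full schema from the guide:

```sql
-- Sketch only: ENGINE and ORDER BY appear in the diff context; columns are abridged
CREATE TABLE youtube
(
    `id` String,
    `upload_date_str` String,  -- raw value kept, since some rows are not valid dates
    `upload_date` Date,        -- parsed with parseDateTimeBestEffortUSOrZero
    `uploader` String,
    `title` String,
    `description` String,
    `view_count` UInt64,
    `like_count` UInt64,
    `dislike_count` UInt64
)
ENGINE = MergeTree
ORDER BY (uploader, upload_date)
```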
@@ -90,7 +97,9 @@ ENGINE = MergeTree
 ORDER BY (uploader, upload_date)
 ```
 
-3. The following command streams the records from the S3 files into the `youtube` table.
+### Insert data {#insert-data}
+
+The following command streams the records from the S3 files into the `youtube` table.
 
 :::important
 This inserts a lot of data - 4.65 billion rows. If you do not want the entire dataset, simply add a `LIMIT` clause with the desired number of rows.
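The streaming `INSERT` itself sits in unchanged lines, so the diff does not show it. A minimal sketch under the abridged schema above, using the `parseDateTimeBestEffortUSOrZero` and `ifNull` handling described in the next hunk; the real `SELECT` list is longer and includes casts omitted here:

```sql
-- Sketch only: assumes the abridged schema above; real column list is longer
INSERT INTO youtube
SELECT
    id,
    upload_date AS upload_date_str,
    toDate(parseDateTimeBestEffortUSOrZero(upload_date::String)) AS upload_date,
    ifNull(uploader, '') AS uploader,
    ifNull(title, '') AS title,
    ifNull(description, '') AS description,
    view_count,
    like_count,
    dislike_count
FROM s3(
    'https://clickhouse-public-datasets.s3.amazonaws.com/youtube/original/files/*.zst',
    'JSONLines'
)
LIMIT 1000 -- per the note above: drop the LIMIT to load the full dataset
```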
@@ -133,7 +142,10 @@ Some comments about our `INSERT` command:
 - The `upload_date` column contains valid dates, but it also contains strings like "4 hours ago" - which is certainly not a valid date. We decided to store the original value in `upload_date_str` and attempt to parse it with `toDate(parseDateTimeBestEffortUSOrZero(upload_date::String))`. If the parsing fails we just get `0`
 - We used `ifNull` to avoid getting `NULL` values in our table. If an incoming value is `NULL`, the `ifNull` function is setting the value to an empty string
 
-4. Open a new tab in the SQL Console of ClickHouse Cloud (or a new `clickhouse-client` window) and watch the count increase. It will take a while to insert 4.56B rows, depending on your server resources. (Without any tweaking of settings, it takes about 4.5 hours.)
+### Count the number of rows {#count-row-numbers}
+
+Open a new tab in the SQL Console of ClickHouse Cloud (or a new `clickhouse-client` window) and watch the count increase.
+It will take a while to insert 4.56B rows, depending on your server resources. (Without any tweaking of settings, it takes about 4.5 hours.)
 
 ```sql
 SELECT formatReadableQuantity(count())
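The progress query is split across the hunk boundary (this hunk ends with the `SELECT`; the next hunk's header carries the `FROM`). Assembled, it is simply:

```sql
SELECT formatReadableQuantity(count())
FROM youtube
```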
@@ -146,7 +158,9 @@ FROM youtube
 └─────────────────────────────────┘
 ```
 
-5. Once the data is inserted, go ahead and count the number of dislikes of your favorite videos or channels. Let's see how many videos were uploaded by ClickHouse:
+### Explore the data {#explore-the-data}
+
+Once the data is inserted, go ahead and count the number of dislikes of your favorite videos or channels. Let's see how many videos were uploaded by ClickHouse:
 
 ```sql
 SELECT count()
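This query is also split by the hunk boundary; with the `WHERE` clause from the next hunk's header, it reads in full:

```sql
SELECT count()
FROM youtube
WHERE uploader = 'ClickHouse';
```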
@@ -166,7 +180,7 @@ WHERE uploader = 'ClickHouse';
 The query above runs so quickly because we chose `uploader` as the first column of the primary key - so it only had to process 237k rows.
 :::
 
-6. Let's look and likes and dislikes of ClickHouse videos:
+Let's look at the likes and dislikes of ClickHouse videos:
 
 ```sql
 SELECT
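The diff shows only the opening `SELECT` of this query. A sketch of the likes/dislikes query the step runs, assuming a straightforward column list (the exact one lives in unchanged lines):

```sql
-- Assumed shape of the likes/dislikes query; column list not shown in the diff
SELECT
    title,
    like_count,
    dislike_count
FROM youtube
WHERE uploader = 'ClickHouse'
ORDER BY dislike_count DESC;
```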
@@ -193,7 +207,7 @@ The response looks like:
 84 rows in set. Elapsed: 0.013 sec. Processed 155.65 thousand rows, 16.94 MB (11.96 million rows/s., 1.30 GB/s.)
 ```
 
-7. Here is a search for videos with **ClickHouse** in the `title` or `description` fields:
+Here is a search for videos with **ClickHouse** in the `title` or `description` fields:
 
 ```sql
 SELECT
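A sketch of the search query, reconstructed from the result row shown in the next hunk (view count, like count, dislike count, a youtu.be URL, then the title); the exact predicate is an assumption:

```sql
-- Assumed predicate: case-insensitive match on title or description
SELECT
    view_count,
    like_count,
    dislike_count,
    concat('https://youtu.be/', id) AS url,
    title
FROM youtube
WHERE (title ILIKE '%ClickHouse%') OR (description ILIKE '%ClickHouse%')
ORDER BY like_count DESC;
```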
@@ -224,6 +238,8 @@ The results look like:
 │ 3534 │ 62 │ 1 │ https://youtu.be/8nWRhK9gw10 │ CLICKHOUSE - Arquitetura Modular │
 ```
 
+</VerticalStepper>
+
 ## Questions {#questions}
 
 ### If someone disables comments does it lower the chance someone will actually click like or dislike? {#if-someone-disables-comments-does-it-lower-the-chance-someone-will-actually-click-like-or-dislike}
6 image files changed (previews not rendered): 45.1 KB, 91.2 KB, −75.2 KB, 135 KB, 128 KB, 249 KB.
