
Commit 2dadf61

chore: apply npm run lint --prefix specification -- --fix
1 parent 17cb7bf commit 2dadf61

420 files changed: +1064 -1554 lines changed


specification/_global/bulk/BulkRequest.ts

Lines changed: 34 additions & 67 deletions
@@ -31,22 +31,18 @@ import { OperationContainer, UpdateAction } from './types'
 
 /**
  * Bulk index or delete documents.
- * Perform multiple `index`, `create`, `delete`, and `update` actions in a single request.
- * This reduces overhead and can greatly increase indexing speed.
- *
- * If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:
  *
+ * Perform multiple `index`, `create`, `delete`, and `update` actions in a single request.
+ * This reduces overhead and can greatly increase indexing speed.\n
+ * If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:\n
  * * To use the `create` action, you must have the `create_doc`, `create`, `index`, or `write` index privilege. Data streams support only the `create` action.
  * * To use the `index` action, you must have the `create`, `index`, or `write` index privilege.
  * * To use the `delete` action, you must have the `delete` or `write` index privilege.
  * * To use the `update` action, you must have the `index` or `write` index privilege.
  * * To automatically create a data stream or index with a bulk API request, you must have the `auto_configure`, `create_index`, or `manage` index privilege.
- * * To make the result of a bulk operation visible to search using the `refresh` parameter, you must have the `maintenance` or `manage` index privilege.
- *
- * Automatic data stream creation requires a matching index template with data stream enabled.
- *
- * The actions are specified in the request body using a newline delimited JSON (NDJSON) structure:
- *
+ * * To make the result of a bulk operation visible to search using the `refresh` parameter, you must have the `maintenance` or `manage` index privilege.\n
+ * Automatic data stream creation requires a matching index template with data stream enabled.\n
+ * The actions are specified in the request body using a newline delimited JSON (NDJSON) structure:\n
  * ```
  * action_and_meta_data\n
  * optional_source\n
@@ -55,93 +51,65 @@ import { OperationContainer, UpdateAction } from './types'
  * ....
  * action_and_meta_data\n
  * optional_source\n
- * ```
- *
+ * ```\n
  * The `index` and `create` actions expect a source on the next line and have the same semantics as the `op_type` parameter in the standard index API.
  * A `create` action fails if a document with the same ID already exists in the target
- * An `index` action adds or replaces a document as necessary.
- *
+ * An `index` action adds or replaces a document as necessary.\n
  * NOTE: Data streams support only the `create` action.
- * To update or delete a document in a data stream, you must target the backing index containing the document.
- *
- * An `update` action expects that the partial doc, upsert, and script and its options are specified on the next line.
- *
- * A `delete` action does not expect a source on the next line and has the same semantics as the standard delete API.
- *
+ * To update or delete a document in a data stream, you must target the backing index containing the document.\n
+ * An `update` action expects that the partial doc, upsert, and script and its options are specified on the next line.\n
+ * A `delete` action does not expect a source on the next line and has the same semantics as the standard delete API.\n
  * NOTE: The final line of data must end with a newline character (`\n`).
  * Each newline character may be preceded by a carriage return (`\r`).
  * When sending NDJSON data to the `_bulk` endpoint, use a `Content-Type` header of `application/json` or `application/x-ndjson`.
- * Because this format uses literal newline characters (`\n`) as delimiters, make sure that the JSON actions and sources are not pretty printed.
- *
- * If you provide a target in the request path, it is used for any actions that don't explicitly specify an `_index` argument.
- *
+ * Because this format uses literal newline characters (`\n`) as delimiters, make sure that the JSON actions and sources are not pretty printed.\n
+ * If you provide a target in the request path, it is used for any actions that don't explicitly specify an `_index` argument.\n
  * A note on the format: the idea here is to make processing as fast as possible.
- * As some of the actions are redirected to other shards on other nodes, only `action_meta_data` is parsed on the receiving node side.
- *
- * Client libraries using this protocol should try and strive to do something similar on the client side, and reduce buffering as much as possible.
- *
+ * As some of the actions are redirected to other shards on other nodes, only `action_meta_data` is parsed on the receiving node side.\n
+ * Client libraries using this protocol should try and strive to do something similar on the client side, and reduce buffering as much as possible.\n
  * There is no "correct" number of actions to perform in a single bulk request.
  * Experiment with different settings to find the optimal size for your particular workload.
  * Note that Elasticsearch limits the maximum size of a HTTP request to 100mb by default so clients must ensure that no request exceeds this size.
  * It is not possible to index a single document that exceeds the size limit, so you must pre-process any such documents into smaller pieces before sending them to Elasticsearch.
- * For instance, split documents into pages or chapters before indexing them, or store raw binary data in a system outside Elasticsearch and replace the raw data with a link to the external system in the documents that you send to Elasticsearch.
- *
- * **Client suppport for bulk requests**
- *
- * Some of the officially supported clients provide helpers to assist with bulk requests and reindexing:
- *
+ * For instance, split documents into pages or chapters before indexing them, or store raw binary data in a system outside Elasticsearch and replace the raw data with a link to the external system in the documents that you send to Elasticsearch.\n
+ * **Client suppport for bulk requests**\n
+ * Some of the officially supported clients provide helpers to assist with bulk requests and reindexing:\n
  * * Go: Check out `esutil.BulkIndexer`
  * * Perl: Check out `Search::Elasticsearch::Client::5_0::Bulk` and `Search::Elasticsearch::Client::5_0::Scroll`
  * * Python: Check out `elasticsearch.helpers.*`
  * * JavaScript: Check out `client.helpers.*`
  * * .NET: Check out `BulkAllObservable`
  * * PHP: Check out bulk indexing.
- * * Ruby: Check out `Elasticsearch::Helpers::BulkHelper`
- *
- * **Submitting bulk requests with cURL**
- *
+ * * Ruby: Check out `Elasticsearch::Helpers::BulkHelper`\n
+ * **Submitting bulk requests with cURL**\n
  * If you're providing text file input to `curl`, you must use the `--data-binary` flag instead of plain `-d`.
- * The latter doesn't preserve newlines. For example:
- *
+ * The latter doesn't preserve newlines. For example:\n
  * ```
  * $ cat requests
  * { "index" : { "_index" : "test", "_id" : "1" } }
  * { "field1" : "value1" }
  * $ curl -s -H "Content-Type: application/x-ndjson" -XPOST localhost:9200/_bulk --data-binary "@requests"; echo
  * {"took":7, "errors": false, "items":[{"index":{"_index":"test","_id":"1","_version":1,"result":"created","forced_refresh":false}}]}
- * ```
- *
- * **Optimistic concurrency control**
- *
+ * ```\n
+ * **Optimistic concurrency control**\n
  * Each `index` and `delete` action within a bulk API call may include the `if_seq_no` and `if_primary_term` parameters in their respective action and meta data lines.
- * The `if_seq_no` and `if_primary_term` parameters control how operations are run, based on the last modification to existing documents. See Optimistic concurrency control for more details.
- *
- * **Versioning**
- *
+ * The `if_seq_no` and `if_primary_term` parameters control how operations are run, based on the last modification to existing documents. See Optimistic concurrency control for more details.\n
+ * **Versioning**\n
  * Each bulk item can include the version value using the `version` field.
  * It automatically follows the behavior of the index or delete operation based on the `_version` mapping.
- * It also support the `version_type`.
- *
- * **Routing**
- *
+ * It also support the `version_type`.\n
+ * **Routing**\n
  * Each bulk item can include the routing value using the `routing` field.
- * It automatically follows the behavior of the index or delete operation based on the `_routing` mapping.
- *
- * NOTE: Data streams do not support custom routing unless they were created with the `allow_custom_routing` setting enabled in the template.
- *
- * **Wait for active shards**
- *
- * When making bulk calls, you can set the `wait_for_active_shards` parameter to require a minimum number of shard copies to be active before starting to process the bulk request.
- *
- * **Refresh**
- *
- * Control when the changes made by this request are visible to search.
- *
+ * It automatically follows the behavior of the index or delete operation based on the `_routing` mapping.\n
+ * NOTE: Data streams do not support custom routing unless they were created with the `allow_custom_routing` setting enabled in the template.\n
+ * **Wait for active shards**\n
+ * When making bulk calls, you can set the `wait_for_active_shards` parameter to require a minimum number of shard copies to be active before starting to process the bulk request.\n
+ * **Refresh**\n
+ * Control when the changes made by this request are visible to search.\n
  * NOTE: Only the shards that receive the bulk request will be affected by refresh.
  * Imagine a `_bulk?refresh=wait_for` request with three documents in it that happen to be routed to different shards in an index with five shards.
  * The request will only wait for those three shards to refresh.
- * The other two shards that make up the index do not participate in the `_bulk` request at all.
- *
+ * The other two shards that make up the index do not participate in the `_bulk` request at all.\n
  * You might want to disable the refresh interval temporarily to improve indexing throughput for large bulk requests.
  * Refer to the linked documentation for step-by-step instructions using the index settings API.
  * @rest_spec_name bulk
@@ -150,7 +118,6 @@ import { OperationContainer, UpdateAction } from './types'
  * @doc_id docs-bulk
  * @ext_doc_id indices-refresh-disable
  * @doc_tag document
- *
  */
 export interface Request<TDocument, TPartialDocument> extends RequestBase {
   urls: [
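The doc comment above points JavaScript users at `client.helpers.*`. As a minimal sketch of the NDJSON flow it describes — one action-and-metadata line per document, batched client-side to keep buffering low — here is the `@elastic/elasticsearch` bulk helper in use; the node URL, index name, and documents are placeholder assumptions, not part of this commit.

```ts
import { Client } from '@elastic/elasticsearch'

// Placeholder cluster address.
const client = new Client({ node: 'http://localhost:9200' })

// Placeholder documents; in practice this can be any iterable or stream.
const docs = [{ field1: 'value1' }, { field1: 'value2' }]

async function run() {
  // The helper serializes each document into an NDJSON pair
  // (action line + source line) and flushes batches to `_bulk`.
  const stats = await client.helpers.bulk({
    datasource: docs,
    onDocument() {
      // One action-and-metadata line per document; `index` adds or
      // replaces, while `create` would fail on an existing ID.
      return { index: { _index: 'my-index' } }
    }
  })
  console.log(stats.total, stats.failed)
}

run().catch(console.error)
```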

specification/_global/clear_scroll/ClearScrollRequest.ts

Lines changed: 1 addition & 0 deletions
@@ -22,6 +22,7 @@ import { ScrollIds } from '@_types/common'
 
 /**
  * Clear a scrolling search.
+ *
  * Clear the search context and results for a scrolling search.
  * @rest_spec_name clear_scroll
  * @availability stack stability=stable
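For context on the behavior this doc comment describes, a hedged sketch of opening and then explicitly clearing a scrolling search with the `@elastic/elasticsearch` client; the index name and keep-alive value are illustrative assumptions.

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' }) // placeholder node

async function run() {
  // `scroll` keeps a search context alive on the server between pages.
  const page = await client.search({
    index: 'my-index', // placeholder index
    scroll: '1m',
    query: { match_all: {} }
  })

  // ...page through results with client.scroll({ scroll_id, scroll: '1m' })...

  // Release the search context eagerly instead of letting it expire.
  await client.clearScroll({ scroll_id: page._scroll_id! })
}

run().catch(console.error)
```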

specification/_global/close_point_in_time/ClosePointInTimeRequest.ts

Lines changed: 1 addition & 0 deletions
@@ -22,6 +22,7 @@ import { Id } from '@_types/common'
 
 /**
  * Close a point in time.
+ *
  * A point in time must be opened explicitly before being used in search requests.
  * The `keep_alive` parameter tells Elasticsearch how long it should persist.
  * A point in time is automatically closed when the `keep_alive` period has elapsed.
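A hedged sketch of the lifecycle this comment documents — open a point in time, use it, close it before `keep_alive` elapses — again with the `@elastic/elasticsearch` client and placeholder names.

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' }) // placeholder node

async function run() {
  // A point in time must be opened explicitly; `keep_alive` bounds
  // how long Elasticsearch persists it.
  const pit = await client.openPointInTime({
    index: 'my-index', // placeholder index
    keep_alive: '1m'
  })

  // ...run searches that pass { pit: { id: pit.id, keep_alive: '1m' } }...

  // Close it once finished rather than waiting for keep_alive to elapse.
  await client.closePointInTime({ id: pit.id })
}

run().catch(console.error)
```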

specification/_global/count/CountRequest.ts

Lines changed: 3 additions & 5 deletions
@@ -30,13 +30,11 @@ import { Operator } from '@_types/query_dsl/Operator'
 
 /**
  * Count search results.
- * Get the number of documents matching a query.
  *
+ * Get the number of documents matching a query.\n
  * The query can be provided either by using a simple query string as a parameter, or by defining Query DSL within the request body.
- * The query is optional. When no query is provided, the API uses `match_all` to count all the documents.
- *
- * The count API supports multi-target syntax. You can run a single count API search across multiple data streams and indices.
- *
+ * The query is optional. When no query is provided, the API uses `match_all` to count all the documents.\n
+ * The count API supports multi-target syntax. You can run a single count API search across multiple data streams and indices.\n
  * The operation is broadcast across all shards.
  * For each shard ID group, a replica is chosen and the search is run against it.
  * This means that replicas increase the scalability of the count.
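To illustrate the two points the reworked comment makes — the query is optional (`match_all` by default) and the API accepts multi-target syntax — a small sketch with the `@elastic/elasticsearch` client; the index patterns and query field are assumptions.

```ts
import { Client } from '@elastic/elasticsearch'

const client = new Client({ node: 'http://localhost:9200' }) // placeholder node

async function run() {
  // Multi-target syntax: one count across several indices/data streams.
  const filtered = await client.count({
    index: 'logs-*,my-index', // placeholder targets
    query: { term: { 'user.id': 'kimchy' } } // placeholder Query DSL
  })

  // With no query, the API counts all documents via an implicit match_all.
  const all = await client.count({ index: 'logs-*' })

  console.log(filtered.count, all.count)
}

run().catch(console.error)
```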
