
Commit 939b22b

update documentation for 7.7 (#4653)

1 parent: f23628f

33 files changed: +14 −133 lines

docs/aggregations/bucket/parent/parent-aggregation-usage.asciidoc
Lines changed: 0 additions & 1 deletion

@@ -47,7 +47,6 @@ new ParentAggregation("name_of_parent_agg", typeof(CommitActivity)) <1>
 }
 ----
 <1> `join` field is determined from the _child_ type. In this example, it is `CommitActivity`
-
 <2> sub-aggregations are on the type determined from the `join` field. In this example, a `Project` is a parent of `CommitActivity`
 
 [source,javascript]

docs/aggregations/writing-aggregations.asciidoc
Lines changed: 0 additions & 2 deletions

@@ -232,7 +232,6 @@ return s => s
 );
 ----
 <1> a list of aggregation functions to apply
-
 <2> Using LINQ's `Aggregate()` function to accumulate/apply all of the aggregation functions
 
 [[handling-aggregate-response]]
@@ -276,6 +275,5 @@ var maxPerChild = childAggregation.Max("max_per_child");
 maxPerChild.Should().NotBeNull(); <2>
 ----
 <1> Do something with the average per child. Here we just assert it's not null
-
 <2> Do something with the max per child. Here we just assert it's not null
 

docs/client-concepts/connection-pooling/building-blocks/connection-pooling.asciidoc
Lines changed: 0 additions & 1 deletion

@@ -97,7 +97,6 @@ var pool = new CloudConnectionPool(cloudId, credentials); <2>
 var client = new ElasticClient(new ConnectionSettings(pool));
 ----
 <1> a username and password that can access Elasticsearch service on Elastic Cloud
-
 <2> `cloudId` is a value that can be retrieved from the Elastic Cloud web console
 
 This type of pool, like its parent the `SingleNodeConnectionPool`, is hardwired to opt out of

docs/client-concepts/connection-pooling/exceptions/unexpected-exceptions.asciidoc
Lines changed: 0 additions & 7 deletions

@@ -58,11 +58,8 @@ audit = await audit.TraceUnexpectedException(
 );
 ----
 <1> set up a cluster with 10 nodes
-
 <2> where node 2 on port 9201 always throws an exception
-
 <3> The first call to 9200 returns a healthy response
-
 <4> ...but the second call, to 9201, returns a bad response
 
 Sometimes, an unexpected exception happens further down in the pipeline. In this scenario, we
@@ -101,9 +98,7 @@ audit = await audit.TraceUnexpectedException(
 );
 ----
 <1> calls on 9200 set up to throw a `HttpRequestException` or a `WebException`
-
 <2> calls on 9201 set up to throw an `Exception`
-
 <3> Assert that the audit trail for the client call includes the bad response from 9200 and 9201
 
 An unexpected hard exception on ping and sniff is something we *do* try to recover from and failover to retrying on the next node.
@@ -148,8 +143,6 @@ audit = await audit.TraceUnexpectedException(
 );
 ----
 <1> `InnerException` is the exception that brought the request down
-
 <2> The hard exception that happened on ping is still available though
-
 <3> An exception can be hard to relate back to a point in time, so the exception is also available on the audit trail
 

docs/client-concepts/connection-pooling/exceptions/unrecoverable-exceptions.asciidoc
Lines changed: 0 additions & 5 deletions

@@ -81,7 +81,6 @@ var audit = new Auditor(() => VirtualClusterWith
 );
 ----
 <1> Always succeed on ping
-
 <2> ...but always fail on calls with a 401 Bad Authentication response
 
 Now, let's make a client call. We'll see that the first audit event is a successful ping
@@ -102,9 +101,7 @@ audit = await audit.TraceElasticsearchException(
 );
 ----
 <1> First call results in a successful ping
-
 <2> Second call results in a bad response
-
 <3> The reason for the bad response is Bad Authentication
 
 When a bad authentication response occurs, the client attempts to deserialize the response body returned;
@@ -138,7 +135,6 @@ audit = await audit.TraceElasticsearchException(
 );
 ----
 <1> Always return a 401 bad response with a HTML response on client calls
-
 <2> Assert that the response body bytes are null
 
 Now in this example, by turning on `DisableDirectStreaming()` on `ConnectionSettings`, we see the same behaviour exhibited
@@ -173,6 +169,5 @@ audit = await audit.TraceElasticsearchException(
 );
 ----
 <1> Response bytes are set on the response
-
 <2> Assert that the response contains `"nginx/"`
 

docs/client-concepts/connection-pooling/max-retries/respects-max-retry.asciidoc
Lines changed: 0 additions & 1 deletion

@@ -84,7 +84,6 @@ audit = await audit.TraceCall(
 );
 ----
 <1> Set the maximum number of retries to 3
-
 <2> The client call trace returns an `MaxRetriesReached` audit after the initial attempt and the number of retries allowed
 
 In our previous example we simulated very fast failures, but in the real world, a call might take upwards of a second.

docs/client-concepts/connection-pooling/pinging/first-usage.asciidoc
Lines changed: 0 additions & 5 deletions

@@ -92,13 +92,9 @@ await audit.TraceCalls(
 );
 ----
 <1> The first call goes to 9200, which succeeds
-
 <2> The 2nd call does a ping on 9201 because its used for the first time. This fails
-
 <3> So we ping 9202. This _also_ fails
-
 <4> We then ping 9203 because we haven't used it before and it succeeds
-
 <5> Finally, we assert that the connection pool has two nodes that are marked as dead
 
 All nodes are pinged on first use, provided they are healthy
@@ -125,6 +121,5 @@ await audit.TraceCalls(
 );
 ----
 <1> Pings on nodes always succeed
-
 <2> A successful ping on each node
 

docs/client-concepts/connection-pooling/request-overrides/disable-sniff-ping-per-request.asciidoc
Lines changed: 0 additions & 5 deletions

@@ -65,11 +65,8 @@ audit = await audit.TraceCalls(
 );
 ----
 <1> disable sniffing
-
 <2> first call is a successful ping
-
 <3> sniff on startup call happens here, on the second call
-
 <4> No sniff on startup again
 
 Now, let's disable pinging on the request
@@ -93,7 +90,6 @@ audit = await audit.TraceCall(
 );
 ----
 <1> disable ping
-
 <2> No ping after sniffing
 
 Finally, let's demonstrate disabling both sniff and ping on the request
@@ -115,6 +111,5 @@ audit = await audit.TraceCall(
 );
 ----
 <1> disable ping and sniff
-
 <2> no ping or sniff before the call
 

docs/client-concepts/connection-pooling/round-robin/skip-dead-nodes.asciidoc
Lines changed: 0 additions & 3 deletions

@@ -140,9 +140,7 @@ await audit.TraceCalls(
 );
 ----
 <1> The first call goes to 9200 which succeeds
-
 <2> The 2nd call does a ping on 9201 because its used for the first time. It fails so we wrap over to node 9202
-
 <3> The next call goes to 9203 which fails so we should wrap over
 
 A cluster with 2 nodes where the second node fails on ping
@@ -193,6 +191,5 @@ await audit.TraceCalls(
 );
 ----
 <1> All the calls fail
-
 <2> After all our registered nodes are marked dead we want to sample a single dead node each time to quickly see if the cluster is back up. We do not want to retry all 4 nodes
 

docs/client-concepts/connection-pooling/sniffing/on-connection-failure.asciidoc
Lines changed: 0 additions & 7 deletions

@@ -79,13 +79,9 @@ audit = await audit.TraceCalls(
 );
 ----
 <1> When the call fails on 9201, the following sniff succeeds and returns a new cluster state of healthy nodes. This cluster only has 3 nodes and the known masters are 9200 and 9202. A search on 9201 is setup to still fail once
-
 <2> After this second failure on 9201, another sniff will happen which returns a cluster state that no longer fails but looks completely different; It's now three nodes on ports 9210 - 9212, with 9210 and 9212 being master eligible.
-
 <3> We assert we do a sniff on our first known master node 9202 after the failed call on 9201
-
 <4> Our pool should now have three nodes
-
 <5> We assert we do a sniff on the first master node in our updated cluster
 
 ==== Sniffing after ping failure
@@ -151,11 +147,8 @@ audit = await audit.TraceCalls(
 );
 ----
 <1> We assert we do a sniff on our first known master node 9202
-
 <2> Our pool should now have three nodes
-
 <3> We assert we do a sniff on the first master node in our updated cluster
-
 <4> 9210 was already pinged after the sniff returned the new nodes
 
 ==== Client uses publish address
