- `mutation: number` | assigned weight of a mutation | defaults to 10
- `query: number` | assigned weight of a query | defaults to 1
- `depthLimit: number` | throttle queries by the depth of the nested structure | defaults to `Infinity` (i.e. no limit)
- `enforceBoundedLists: boolean` | if true, an error will be thrown if any list types are not bound by slicing arguments [`first`, `last`, `limit`] or directives | defaults to `false`
- `dark: boolean` | if true, the package will calculate complexity, depth and tokens but not throttle any queries. Use this to dark launch the package and monitor the rate limiter's impact without limiting user requests.
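As a rough illustration of how these options might be wired into an Express app, here is a hypothetical sketch; the package import name and the exact middleware signature are assumptions, and only the option names and defaults come from the list above.

```javascript
// Hypothetical usage sketch -- the import name and middleware factory are assumptions.
const express = require('express');
const { rateLimiter } = require('graphql-rate-limiter'); // assumed package/export name

const app = express();

app.use(
    '/graphql',
    rateLimiter({
        mutation: 10,              // weight assigned to mutations (default)
        query: 1,                  // weight assigned to queries (default)
        depthLimit: 10,            // block queries nested more than 10 levels deep
        enforceBoundedLists: true, // throw if a list field has no slicing argument or directive
        dark: false,               // set to true to monitor without throttling
    })
);
```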
All configuration options
## <aname="lists"></a> Notes on Lists
115
115
116
-
The complexity for list types can be set in the schema with the use of directives, or in the query by the varibales passed to the field as slicing arguments.
116
+
For queries that return a list, the complexity can be determined with schema directives, or by providing a slicing argument to the query (`first`, `last`, `limit).
117
117
118
-
1. Slicing arguments: lists must be bounded by one integer slicing argument in order to calculate the comlexity for the field. This package supports the slicing arguments `first`, `last` and `limit`. The complexity of the list will be the value passed as the argument to the field.
118
+
1. Slicing arguments: lists must be bounded by one integer slicing argument in order to calculate the complexity for the field. This package supports the slicing arguments `first`, `last` and `limit`. The complexity of the list will be the value passed as the argument to the field.
119
119
120
120
2. Directives: ... TODO ...
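As a sketch of item 1 above (the field names here are illustrative and not from this README), a list bounded by `first: 5` contributes the value of the slicing argument to the total:

```javascript
// Hypothetical query -- field names are illustrative only.
query {                      // 1 (complexity)
    reviews (first: 5) {     // 5 -- the value of the `first` slicing argument
        stars                // 0
        commentary           // 0
    }
}                            // total complexity: 6
```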
## <aname="how-it-works"></a> How It Works
124
124
125
-
Rate-limiting is done by IP address.
125
+
Requests are rate-limited based on the IP address associated with the request.
126
126
127
-
On server start, the GraphQL (GQL) schema is parsed to build an object that maps GQL types/fields to values corresponding to the weights assigned to each GQL type/field. This object is used internally to cross reference the fields queried by the user with the weight to apply that field when totaling the overall complexity of the query.
127
+
On server start, the GraphQL (GQL) schema is parsed to build an object that maps GQL types/fields to their corresponding weights. Type weights can be provided during <ahref="typeWeights">initial configuration</a>. When a request is received, this object is used to cross reference the fields queried by the user and compute the complexity of each field. The total complexity of the request is the sum of these values.
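As a rough illustration only (the package's actual internal representation is not documented here, so the shape and names below are assumptions), the parsed weight map might resemble:

```javascript
// Illustrative sketch -- not the package's actual internal data structure.
// Weights follow the defaults used in the example below: query 1, objects 1, scalars 0.
const typeWeights = {
    Query:     { weight: 1, fields: { hero: 1 } },
    Character: { weight: 1, fields: { name: 0, id: 0 } },
    Mutation:  { weight: 10, fields: {} },
};
```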
Complexity is determined statically (before any resolvers are called) to estimate the upper bound of the response size - a proxy for the work done by the server to build the response. The total complexity is then used to allow/block the request based on popular rate-limiting algorithms.

Requests for each user are processed sequentially by the rate limiter.
Example (with default weights):

```javascript
query {                        // 1 (complexity)
    hero (episode: EMPIRE) {   // 1
        name                   // 0
        id                     // 0
        // ...
    }
}
```
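To make the allow/block step concrete, here is a minimal token-bucket-style sketch. The README does not name the exact algorithm, so the function, field names, and refill logic below are assumptions; the `tokens` value in the response described later suggests something along these lines.

```javascript
// Hypothetical token-bucket check -- names and refill logic are illustrative only.
function isAllowed(bucket, complexity, now = Date.now()) {
    // Refill tokens based on elapsed time, capped at the bucket's capacity.
    const elapsedSeconds = (now - bucket.lastRefill) / 1000;
    bucket.tokens = Math.min(bucket.capacity, bucket.tokens + elapsedSeconds * bucket.refillRate);
    bucket.lastRefill = now;

    if (complexity > bucket.tokens) {
        return false; // caller responds with 429 and a Retry-After header
    }
    bucket.tokens -= complexity;
    return true; // caller passes the request down the middleware chain
}
```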
## <aname="response"></a> Response
154
154
155
-
1. Blocked Requests: blocked requests recieve a response with,
155
+
1.<b>Blocked Requests</b>: blocked requests recieve a response with,
156
156
157
-
- status of `429` for `Too Many Requests`
157
+
- status of `429` for `Too Many Requests`
158
158
-`Retry-After` header with a value of the time to wait in seconds before the request would be approved (`Infinity` if the complexity is greater than rate-limiting capacity).
159
-
- A JSON response with the `tokens` available, `complexity` of the query, `depth` of the query, `success` of the query set to `false`, and the `timestamp` of the request in ms
159
+
- A JSON response with the `tokens` available, `complexity` of the query, `depth` of the query, `success` of the query set to `false`, and the UNIX `timestamp` of the request
160
160
161
-
2. Successful Requests: successful request are passed onto the next function in the middleware chain with the following properties saved to `res.locals`
161
+
2.<b>Successful Requests</b>: successful requests are passed onto the next function in the middleware chain with the following properties saved to `res.locals`
162
162
163
163
```javascript
164
164
{
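For orientation, a blocked request's JSON body might look like the following; the field names come from the list above, while the values are made up:

```javascript
// Sample 429 body -- values are illustrative only.
{
    "timestamp": 1656093296000,  // UNIX timestamp of the request
    "tokens": 4,                 // tokens available when the request arrived
    "complexity": 7,             // complexity of the blocked query
    "depth": 3,                  // depth of the blocked query
    "success": false
}
```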
## <aname="future-development"></a> Future Development
181
181
182
-
-the ability to use this package with other caching technologies or libraries
183
-
-implement "resolve complexity analysis" for queries
184
-
-implement leaky bucket algorithm for rate-limiting
185
-
-experiment with performance improvements
182
+
-Ability to use this package with other caching technologies or libraries
183
+
-Implement "resolve complexity analysis" for queries
184
+
-Implement leaky bucket algorithm for rate-limiting
185
+
-Experiment with performance improvements
186
186
- caching optimization
187
-
- ensure connection pagination conventions can be accuratly acconuted for in comprlexity analysis
187
+
- Ensure connection pagination conventions can be accuratly acconuted for in complexity analysis
188
+
- Ability to use middleware with other server frameworks