The above acts as a kind of minimum batch depth, with a time overload. It won't dispatch if the loader depth is less
than or equal to 10, but if 200ms pass it will dispatch.

## Chaining DataLoader calls

It's natural to want to have chained `DataLoader` calls.

```java
CompletableFuture<Object> chainedCalls = dataLoaderA.load("user1")
        .thenCompose(userAsKey -> dataLoaderB.load(userAsKey));
```

However, the challenge here is how to be efficient in batching terms.

This is discussed in detail in the https://github.com/graphql-java/java-dataloader/issues/54 issue.

Since `CompletableFuture`s are async and can complete at some time in the future, when is the best time to call
`dispatch` again after a load call has completed, so as to maximize batching?

The most naive approach is to immediately dispatch the second chained call as follows:

```java
CompletableFuture<Object> chainedWithImmediateDispatch = dataLoaderA.load("user1")
        .thenCompose(userAsKey -> {
            CompletableFuture<Object> loadB = dataLoaderB.load(userAsKey);
            dataLoaderB.dispatch();
            return loadB;
        });
```

The above will work, however the window for batching together multiple calls to `dataLoaderB` will be very small, and
it will likely result in batch sizes of 1.

This is a very difficult problem to solve because you have to balance two competing design ideals: you want to maximize the
batching window of secondary calls, yet keep that window of time small so your customer requests don't take longer than necessary.

* If the batching window is wide, you will maximize the number of keys presented to a `BatchLoader`, but your request latency will increase.

* If the batching window is narrow, you will reduce your request latency, but you will also reduce the number of keys presented to a `BatchLoader`.

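To make the trade-off concrete without depending on the `DataLoader` API itself, here is a stdlib-only sketch using a hypothetical `TinyBatcher` class (not part of java-dataloader). Delaying the dispatch with a `ScheduledExecutorService` widens the batching window, so loads that arrive shortly after the first one join the same batch instead of producing batches of 1:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: TinyBatcher is a made-up stand-in for a DataLoader.
// It queues keys on load() and flushes them all when dispatch() is called.
class TinyBatcher {
    private final ConcurrentLinkedQueue<String> pending = new ConcurrentLinkedQueue<>();
    final AtomicInteger lastBatchSize = new AtomicInteger();

    void load(String key) {
        pending.add(key);
    }

    void dispatch() {
        int size = 0;
        while (pending.poll() != null) {
            size++;
        }
        if (size > 0) {
            lastBatchSize.set(size);
        }
    }

    public static void main(String[] args) throws Exception {
        TinyBatcher batcher = new TinyBatcher();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        batcher.load("user1");
        // Immediate dispatch here would flush a batch of 1; instead we schedule
        // the dispatch 50ms out, so the later loads join the same batch.
        ScheduledFuture<?> flush = scheduler.schedule(
                () -> { batcher.dispatch(); }, 50, TimeUnit.MILLISECONDS);
        batcher.load("user2");
        batcher.load("user3");

        flush.get(); // wait for the delayed dispatch to run
        System.out.println("batch size = " + batcher.lastBatchSize.get()); // prints "batch size = 3"
        scheduler.shutdown();
    }
}
```

The 50ms delay is the batching window: the wider you make it, the bigger the batches, but every caller waits that long before any work is done.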
### ScheduledDataLoaderRegistry ticker mode

The `ScheduledDataLoaderRegistry` offers one solution to this, called "ticker mode", where it will continually reschedule secondary
`DataLoader` calls after the initial `dispatch()` call is made.

The batch window of time is controlled by the schedule duration set up when the `ScheduledDataLoaderRegistry` is created.

```java
ScheduledExecutorService executorService = Executors.newSingleThreadScheduledExecutor();

ScheduledDataLoaderRegistry registry = ScheduledDataLoaderRegistry.newScheduledRegistry()
        .register("a", dataLoaderA)
        .register("b", dataLoaderB)
        .scheduledExecutorService(executorService)
        .schedule(Duration.ofMillis(10))
        .tickerMode(true) // ticker mode is on
        .build();

CompletableFuture<Object> chainedCalls = dataLoaderA.load("user1")
        .thenCompose(userAsKey -> dataLoaderB.load(userAsKey));
```
When ticker mode is on, the chained `DataLoader` calls will complete, but the batching window size will depend on how quickly
the first level of `DataLoader` calls returns compared to the `schedule` of the `ScheduledDataLoaderRegistry`.

If you use ticker mode, then you MUST call `registry.close()` on the `ScheduledDataLoaderRegistry` at the end of the request (say), otherwise
it will continue to reschedule tasks to the `ScheduledExecutorService` associated with the registry.
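The reason closing matters can be seen with a stdlib-only sketch (it uses no DataLoader APIs): a fixed-rate "ticker" on a `ScheduledExecutorService` keeps firing until it is explicitly cancelled, just as ticker mode keeps rescheduling dispatches until the registry is closed:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: a repeating task only stops when cancelled, which is
// the role registry.close() plays for ticker mode.
class TickerSketch {
    static boolean ticksStopAfterCancel() throws Exception {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger ticks = new AtomicInteger();

        // analogous to ticker mode: re-fires every 10ms until cancelled
        ScheduledFuture<?> ticker = scheduler.scheduleAtFixedRate(
                ticks::incrementAndGet, 0, 10, TimeUnit.MILLISECONDS);

        Thread.sleep(100);      // ticks accumulate while the ticker runs
        ticker.cancel(false);   // analogous to registry.close()
        Thread.sleep(20);       // let any in-flight tick finish
        int atCancel = ticks.get();
        Thread.sleep(50);       // no new ticks should arrive now
        scheduler.shutdown();
        return ticks.get() == atCancel;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(ticksStopAfterCancel()); // prints "true"
    }
}
```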

You will want to look at sharing the `ScheduledExecutorService` in some way between requests when creating the `ScheduledDataLoaderRegistry`,
otherwise you will be creating a thread per `ScheduledDataLoaderRegistry` instance created, and with enough concurrent requests
you may create too many threads.
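One way to do that sharing, sketched below with the stdlib only, is a single application-wide scheduler that every per-request registry is handed via `scheduledExecutorService(...)` as shown above (the holder class and thread name here are illustrative, not part of java-dataloader):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

// Illustrative sketch: one shared scheduler for all requests, rather than a
// new thread per ScheduledDataLoaderRegistry instance.
class SharedScheduler {
    static final ScheduledExecutorService SCHEDULER =
            Executors.newSingleThreadScheduledExecutor(runnable -> {
                Thread t = new Thread(runnable, "dataloader-ticker");
                t.setDaemon(true); // daemon thread, so it won't block JVM shutdown
                return t;
            });
}
```

Each request then builds its own registry but passes `SharedScheduler.SCHEDULER`, so `registry.close()` stops that request's rescheduling without shutting down the shared executor.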

## Other information sources
