Commit a2e0443

Document the creation of a super stream

1 parent 7a30758


src/docs/asciidoc/super-streams.adoc

Lines changed: 14 additions & 2 deletions
@@ -3,7 +3,7 @@
 [[super-streams]]
 ==== Super Streams (Partitioned Streams)
 
-WARNING: Super streams are an *experimental* feature, they are subject to change.
+WARNING: Super Streams require *RabbitMQ 3.11* or later.
 
 A super stream is a logical stream made of several individual streams.
 In essence, a super stream is a partitioned stream that brings scalability compared to a single stream.
@@ -58,6 +58,18 @@ When a super stream is in use, the stream Java client queries this information t
 From the application code point of view, using a super stream is mostly configuration-based.
 Some logic must also be provided to extract routing information from messages.
 
+===== Super Stream Creation
+
+It is possible to create the topology of a super stream with any AMQP 0.9.1 library or with the https://www.rabbitmq.com/management.html[management plugin], but the `rabbitmq-streams add_super_stream` command is a handy shortcut.
+Here is how to create an invoices super stream with 3 partitions:
+
+.Creating a super stream from the CLI
+----
+rabbitmq-streams add_super_stream invoices --partitions 3
+----
+
+Use `rabbitmq-streams add_super_stream --help` to learn more about the command.
+
 [[super-stream-producer]]
 ===== Publishing to a Super Stream
 
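The paragraph added above also mentions that the topology can be declared with any AMQP 0.9.1 library. Below is a minimal, illustrative sketch with the Java `amqp-client`: a durable direct exchange named after the super stream, one stream queue per partition (`x-queue-type: stream`), and one binding per partition with the partition index as the routing key. The connection settings and class name are assumptions, and the CLI may set additional arguments (for partition ordering, for example) that this sketch omits, so `rabbitmq-streams add_super_stream` remains the recommended shortcut.

.Sketch: declaring an equivalent topology with AMQP 0.9.1 (illustrative)
[source,java]
----
// Illustrative sketch only: connection settings and class name are assumptions.
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.util.Collections;
import java.util.Map;

public class CreateSuperStreamTopology {

  public static void main(String[] args) throws Exception {
    ConnectionFactory factory = new ConnectionFactory(); // assumes a local broker
    try (Connection connection = factory.newConnection();
         Channel channel = connection.createChannel()) {
      String superStream = "invoices";
      int partitions = 3;
      // a super stream maps to a durable direct exchange...
      channel.exchangeDeclare(superStream, "direct", true);
      Map<String, Object> streamQueueArguments =
          Collections.singletonMap("x-queue-type", "stream");
      for (int i = 0; i < partitions; i++) {
        String partition = superStream + "-" + i;
        // ...bound to one stream queue per partition...
        channel.queueDeclare(partition, true, false, false, streamQueueArguments);
        // ...with the partition index as the binding key
        channel.queueBind(partition, superStream, String.valueOf(i));
      }
    }
  }
}
----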
@@ -404,4 +416,4 @@ The external store must be able to cope with the message rate in a real-world sc
 This way the broker will resume the dispatching at this location in the stream.
 * A well-behaved `ConsumerUpdateListener` must make sure the last processed offset is stored when the consumer becomes inactive, so that the consumer that will take over can look up the offset and resume consuming at the right location.
 Our `ConsumerUpdateListener` does not do anything when the consumer becomes inactive (it returns `null`): it can afford this because the offset is stored for each message.
-Make sure to store the last processed offset when the consumer becomes inactive to avoid duplicates when the consumption resumes elsewhere.
+Make sure to store the last processed offset when the consumer becomes inactive to avoid duplicates when the consumption resumes elsewhere.
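The last hunk touches the discussion of external offset tracking with a `ConsumerUpdateListener`. As an illustrative sketch of the pattern described there, the listener below resumes right after the last stored offset when the consumer becomes active and returns `null` when it becomes inactive, relying on the offset being stored for every processed message. The `ExternalOffsetStore` interface and the stream and consumer names are assumptions for the example, not part of the client API.

.Sketch: a ConsumerUpdateListener backed by an external offset store (illustrative)
[source,java]
----
// Illustrative sketch: ExternalOffsetStore and the names used here are assumptions.
import com.rabbitmq.stream.Consumer;
import com.rabbitmq.stream.Environment;
import com.rabbitmq.stream.OffsetSpecification;

public class ExternalOffsetTrackingSketch {

  // hypothetical external store (relational database, key-value store, ...)
  interface ExternalOffsetStore {
    long read(String reference);  // last processed offset, or -1 if none yet
    void write(String reference, long offset);
  }

  static Consumer createConsumer(Environment environment, ExternalOffsetStore store) {
    String reference = "application-1"; // illustrative consumer name
    return environment.consumerBuilder()
        .stream("my-stream")            // illustrative stream name
        .name(reference)
        .singleActiveConsumer()         // the listener is called on activity changes
        .noTrackingStrategy()           // offsets live in the external store, not the broker
        .consumerUpdateListener(context -> {
          if (context.isActive()) {
            // becoming active: resume right after the last stored offset
            long last = store.read(reference);
            return last < 0 ? OffsetSpecification.next()
                            : OffsetSpecification.offset(last + 1);
          }
          // becoming inactive: the offset is already stored for each message,
          // so there is nothing more to do
          return null;
        })
        .messageHandler((context, message) -> {
          // process the message, then record its offset in the external store
          store.write(reference, context.offset());
        })
        .build();
  }
}
----

Storing the offset for each message is what makes the `null` return on deactivation safe; with a less frequent storage policy, the listener should persist the last processed offset before the consumer hands over.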
