diff --git a/.github/workflows/build-docs-gh-pages.yml b/.github/workflows/build-docs-gh-pages.yml
new file mode 100644
index 00000000000..dbd945432ca
--- /dev/null
+++ b/.github/workflows/build-docs-gh-pages.yml
@@ -0,0 +1,60 @@
+name: Build docs and upload ZIP
+
+on:
+ pull_request:
+ branches:
+ - 4.x
+ workflow_dispatch:
+
+jobs:
+ build-docs:
+ runs-on: ubuntu-latest
+ permissions:
+ contents: write
+
+ steps:
+ # 1. Checkout current branch
+ - name: Checkout current branch
+ uses: actions/checkout@v4
+
+ # 2. Java 8 (with Maven cache)
+ - name: Set up Java 8
+ uses: actions/setup-java@v4
+ with:
+ distribution: temurin
+ java-version: "8"
+ cache: maven # <--- cache Maven deps (~/.m2/repository)
+
+ # 3. Python 3.10.19
+ - name: Set up Python 3.10.19
+ uses: actions/setup-python@v5
+ with:
+ python-version: "3.10.19"
+
+ # 4. Install MkDocs dependencies
+ - name: Install MkDocs dependencies
+ run: |
+ python -m pip install --upgrade pip
+ pip install \
+ mkdocs \
+ mkdocs-material \
+ mkdocs-awesome-pages-plugin \
+ mkdocs-macros-plugin
+
+ # 5. Build docs via build-doc.sh
+ - name: Build documentation
+ run: |
+ chmod +x ./build-doc.sh
+ ./build-doc.sh
+
+ # 6. Upload built docs as a ZIP artifact
+ - name: Archive built documentation
+ run: |
+ zip -r docs-built.zip docs
+
+ - name: Upload built doc artifact
+ uses: actions/upload-artifact@v4
+ with:
+ name: built-docs-zip
+ path: docs-built.zip
+ retention-days: 10
\ No newline at end of file
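The archive/upload steps above (`zip -r docs-built.zip docs`) can be dry-run locally before pushing a branch. A minimal sketch, using Python's stdlib `zipfile` CLI instead of assuming `zip` is installed; the docs tree here is a throwaway placeholder, not the real build output:

```shell
# Build a stand-in docs/ tree and archive it the way the workflow does.
set -e
work=$(mktemp -d)
mkdir -p "$work/docs"
printf '<html></html>\n' > "$work/docs/index.html"
cd "$work"
# Equivalent of `zip -r docs-built.zip docs`, via the stdlib zipfile CLI:
python3 -m zipfile -c docs-built.zip docs
# List the archive contents to confirm the docs/ prefix is preserved:
python3 -m zipfile -l docs-built.zip
```

The listing should show entries under `docs/`, matching what the `built-docs-zip` artifact will contain after download.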
diff --git a/.github/workflows/update-docs-staging.yml b/.github/workflows/update-docs-staging.yml
new file mode 100644
index 00000000000..19d3260caf3
--- /dev/null
+++ b/.github/workflows/update-docs-staging.yml
@@ -0,0 +1,81 @@
+name: Update gh-pages-staging on commit
+
+on:
+ push:
+ branches:
+ - 4.x
+ workflow_dispatch:
+
+jobs:
+ build-docs:
+ runs-on: ubuntu-latest
+ permissions:
+ contents: write
+
+ steps:
+ # 1. Checkout doc branch
+ - name: Checkout current branch
+ uses: actions/checkout@v4
+ with:
+ path: java-driver
+
+ # 2. Java 8
+ - name: Set up Java 8
+ uses: actions/setup-java@v4
+ with:
+ distribution: temurin
+ java-version: "8"
+ cache: maven # <--- cache Maven deps (~/.m2/repository)
+
+ # 3. Python 3.10.19 + MkDocs + plugins
+ - name: Set up Python 3.10.19
+ uses: actions/setup-python@v5
+ with:
+ python-version: "3.10.19"
+
+ - name: Install MkDocs dependencies
+ run: |
+ python -m pip install --upgrade pip
+ pip install \
+ mkdocs \
+ mkdocs-material \
+ mkdocs-awesome-pages-plugin \
+ mkdocs-macros-plugin
+
+ # 4. Build docs via build-doc.sh
+ - name: Build documentation
+ working-directory: java-driver
+ run: |
+ chmod +x ./build-doc.sh
+ ./build-doc.sh
+
+
+ # 5. Checkout gh-pages-staging branch
+ - name: Checkout gh-pages-staging
+ uses: actions/checkout@v4
+ with:
+ ref: gh-pages-staging
+ path: gh-pages-staging
+
+ - name: Copy and Build Doc
+ working-directory: gh-pages-staging
+ run: |
+ cp -r ../java-driver/docs ./docs/latest
+ mkdocs build
+ cp -r ./out/. . # because the index.html has to be in the root folder
+
+ - name: Commit and push to gh-pages-staging
+ working-directory: gh-pages-staging
+ run: |
+ git config --global user.email "gha@cassandra.apache.org"
+ git config --global user.name "GHA for Apache Cassandra Website"
+
+ git add .
+
+ if git diff --cached --quiet; then
+ echo "No changes to push to gh-pages-staging."
+ exit 0
+ fi
+
+ git commit -m "Update generated docs"
+ git push origin gh-pages-staging
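The final step's guard (`git diff --cached --quiet` after `git add .`) is what keeps empty commits off `gh-pages-staging`. A sketch of that pattern in a throwaway repo, so it can run anywhere; the branch and remote names from the workflow are omitted here:

```shell
set -e
# Throwaway repo standing in for the gh-pages-staging checkout.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.invalid"
git config user.name "demo"
echo "generated docs" > index.html

# First pass: there is staged content, so a commit is created.
git add .
if git diff --cached --quiet; then
  echo "No changes to push."
else
  git commit -q -m "Update generated docs"
fi

# Second pass: nothing new to stage, so the guard takes the early exit.
git add .
git diff --cached --quiet && echo "No changes to push."
```

`git diff --cached --quiet` exits 0 when the index matches `HEAD`, which is why the workflow can `exit 0` there without pushing.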
diff --git a/README.md b/README.md
index 0f6c2bb5a6f..a2f9c16f68c 100644
--- a/README.md
+++ b/README.md
@@ -45,8 +45,8 @@ are multiple modules, all prefixed with `java-driver-`.
Note that the query builder is now published as a separate artifact, you'll need to add the
dependency if you plan to use it.
-Refer to each module's manual for more details ([core](manual/core/), [query
-builder](manual/query_builder/), [mapper](manual/mapper)).
+Refer to each module's manual for more details ([core](manual/core/README.md), [query
+builder](manual/query_builder/README.md), [mapper](manual/mapper/README.md)).
[org.apache.cassandra]: http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.cassandra%22
@@ -65,7 +65,7 @@ but DataStax does not officially support these systems.
Java Driver 4 is **not binary compatible** with previous versions. However, most of the concepts
remain unchanged, and the new API will look very familiar to 2.x and 3.x users.
-See the [upgrade guide](upgrade_guide/) for details.
+See the [upgrade guide](upgrade_guide/README.md) for details.
## Error Handling
@@ -73,7 +73,7 @@ See the [Cassandra error handling done right blog](https://www.datastax.com/blog
## Useful links
-* [Manual](manual/)
+* [Manual](manual/README.md)
* [API docs]
* Bug tracking: [JIRA]
* [Mailing list]
@@ -83,8 +83,8 @@ See the [Cassandra error handling done right blog](https://www.datastax.com/blog
[API docs]: https://docs.datastax.com/en/drivers/java/4.17
[JIRA]: https://issues.apache.org/jira/issues/?jql=project%20%3D%20CASSJAVA%20ORDER%20BY%20key%20DESC
[Mailing list]: https://groups.google.com/a/lists.datastax.com/forum/#!forum/java-driver-user
-[Changelog]: changelog/
-[FAQ]: faq/
+[Changelog]: changelog/README.md
+[FAQ]: faq/README.md
## License
diff --git a/build-doc.sh b/build-doc.sh
new file mode 100755
index 00000000000..3be00f379b9
--- /dev/null
+++ b/build-doc.sh
@@ -0,0 +1,17 @@
+#!/bin/bash
+set -e
+# Set up python environment
+#pyenv local 3.10.1
+#pip install mkdocs mkdocs-material mkdocs-awesome-pages-plugin mkdocs-macros-plugin
+
+# In some bash/zsh environments, the locale is not set correctly, which causes mkdocs to fail.
+export LC_ALL=en_US.UTF-8
+export LANG=en_US.UTF-8
+
+# Build Javadoc
+mvn clean install -DskipTests # otherwise guava-shaded cannot be found
+# mvn javadoc:javadoc -pl core,query-builder,mapper-runtime
+mvn javadoc:aggregate
+
+# Build manual with API references
+mkdocs build -s # strict mode: the build fails on warnings. Use `mkdocs serve` to preview locally
diff --git a/core/src/main/java/com/datastax/oss/driver/api/core/config/DefaultDriverOption.java b/core/src/main/java/com/datastax/oss/driver/api/core/config/DefaultDriverOption.java
index 60c44193577..9776090a22f 100644
--- a/core/src/main/java/com/datastax/oss/driver/api/core/config/DefaultDriverOption.java
+++ b/core/src/main/java/com/datastax/oss/driver/api/core/config/DefaultDriverOption.java
@@ -1022,8 +1022,8 @@ public enum DefaultDriverOption implements DriverOption {
* }
*
*
- * Note: subnets must be represented as prefix blocks, see {@link
- * inet.ipaddr.Address#isPrefixBlock()}.
+ * Note: subnets must be represented as prefix blocks, see {@code inet.ipaddr.Address#isPrefixBlock()}.
*
*
Value type: {@link java.util.Map Map}<{@link String},{@link String}>
*/
diff --git a/core/src/main/java/com/datastax/oss/driver/api/core/data/CqlVector.java b/core/src/main/java/com/datastax/oss/driver/api/core/data/CqlVector.java
index 8089d551750..da645c02400 100644
--- a/core/src/main/java/com/datastax/oss/driver/api/core/data/CqlVector.java
+++ b/core/src/main/java/com/datastax/oss/driver/api/core/data/CqlVector.java
@@ -80,7 +80,7 @@ public static CqlVector newInstance(List list) {
* Create a new CqlVector instance from the specified string representation.
*
* @param str a String representation of a CqlVector
- * @param subtypeCodec
+ * @param subtypeCodec the codec to use to parse the individual elements
* @return a new CqlVector built from the String representation
*/
public static CqlVector from(@NonNull String str, @NonNull TypeCodec subtypeCodec) {
diff --git a/faq/README.md b/faq/README.md
index 97cb4decd00..1fe06171ddc 100644
--- a/faq/README.md
+++ b/faq/README.md
@@ -65,7 +65,7 @@ At any rate, `CompletionStage` has a `toCompletableFuture()` method. In current
### Where is `DowngradingConsistencyRetryPolicy` from driver 3?
**As of driver 4.10, this retry policy was made available again as a built-in alternative to the
-default retry policy**: see the [manual](../manual/core/retries) to understand how to use it.
+default retry policy**: see the [manual](../manual/core/retries/README.md) to understand how to use it.
For versions between 4.0 and 4.9 inclusive, there is no built-in equivalent of driver 3
`DowngradingConsistencyRetryPolicy`.
@@ -100,7 +100,7 @@ This ability is considered a misfeature and has been removed from driver 4.0 onw
However, due to popular demand, cross-datacenter failover has been brought back to driver 4 in
version 4.10.0.
-If you are using a driver version >= 4.10.0, read the [manual](../manual/core/loadbalancing/) to
+If you are using a driver version >= 4.10.0, read the [manual](../manual/core/load_balancing/README.md) to
understand how to enable this feature; for driver versions < 4.10.0, this feature is simply not
available.
@@ -109,7 +109,7 @@ available.
The driver now uses Java 8's improved date and time API. CQL type `timestamp` is mapped to
`java.time.Instant`, and the corresponding getter and setter are `getInstant` and `setInstant`.
-See [Temporal types](../manual/core/temporal_types/) for more details.
+See [Temporal types](../manual/core/temporal_types/README.md) for more details.
### Why do DDL queries have a higher latency than driver 3?
@@ -119,6 +119,6 @@ noticeably higher latency than driver 3 (about 1 second).
This is because those queries are now *debounced*: the driver adds a short wait in an attempt to
group multiple schema changes into a single metadata refresh. If you want to mitigate this, you can
either adjust the debouncing settings, or group your schema updates while temporarily disabling the
-metadata; see the [performance](../manual/core/performance/#debouncing) page.
+metadata; see the [performance](../manual/core/performance/README.md#debouncing) page.
This only applies to DDL queries; DML statements (`SELECT`, `INSERT`...) are not debounced.
diff --git a/manual/README.md b/manual/README.md
index 049ddc8c8e9..71e2104d37c 100644
--- a/manual/README.md
+++ b/manual/README.md
@@ -21,16 +21,16 @@ under the License.
Driver modules:
-* [Core](core/): the main entry point, deals with connectivity and query execution.
-* [Query builder](query_builder/): a fluent API to create CQL queries programmatically.
-* [Mapper](mapper/): generates the boilerplate to execute queries and convert the results into
+* [Core](core/README.md): the main entry point, deals with connectivity and query execution.
+* [Query builder](query_builder/README.md): a fluent API to create CQL queries programmatically.
+* [Mapper](mapper/README.md): generates the boilerplate to execute queries and convert the results into
application-level objects.
-* [Developer docs](developer/): explains the codebase and internal extension points for advanced
+* [Developer docs](developer/README.md): explains the codebase and internal extension points for advanced
customization.
Common topics:
-* [API conventions](api_conventions/)
-* [Case sensitivity](case_sensitivity/)
-* [OSGi](osgi/)
-* [Cloud](cloud/)
+* [API conventions](api_conventions/README.md)
+* [Case sensitivity](case_sensitivity/README.md)
+* [OSGi](osgi/README.md)
+* [Cloud](cloud/README.md)
diff --git a/manual/cloud/README.md b/manual/cloud/README.md
index 9116b03dac3..79b6bee4f51 100644
--- a/manual/cloud/README.md
+++ b/manual/cloud/README.md
@@ -146,5 +146,5 @@ public class Main {
[Create an Astra database - AWS/Azure/GCP]: https://docs.datastax.com/en/astra/docs/creating-your-astra-database.html
[Access an Astra database - AWS/Azure/GCP]: https://docs.datastax.com/en/astra/docs/obtaining-database-credentials.html#_sharing_your_secure_connect_bundle
[Download the secure connect bundle - AWS/Azure/GCP]: https://docs.datastax.com/en/astra/docs/obtaining-database-credentials.html
-[minimal project structure]: ../core/integration/#minimal-project-structure
-[driver documentation]: ../core/configuration/
+[minimal project structure]: ../core/integration/README.md#minimal-project-structure
+[driver documentation]: ../core/configuration/README.md
diff --git a/manual/core/README.md b/manual/core/README.md
index 5ca4cd7872f..b706eda5ee3 100644
--- a/manual/core/README.md
+++ b/manual/core/README.md
@@ -30,7 +30,7 @@ following coordinates:
```
-(For more details on setting up your build tool, see the [integration](integration/) page.)
+(For more details on setting up your build tool, see the [integration](integration/README.md) page.)
### Quick start
@@ -69,14 +69,14 @@ variants that return a `CompletionStage`).
[CqlSession#builder()] provides a fluent API to create an instance programmatically. Most of the
customization is done through the driver configuration (refer to the
-[corresponding section](configuration/) of this manual for full details).
+[corresponding section](configuration/README.md) of this manual for full details).
-We recommend that you take a look at the [reference configuration](configuration/reference/) for the
+We recommend that you take a look at the [reference configuration](configuration/reference/README.md) for the
list of available options, and cross-reference with the sub-sections in this manual for more
explanations.
By default, `CqlSession.builder().build()` fails immediately if the cluster is not available. If you
-want to retry instead, you can set the [reconnect-on-init](reconnection/#at-init-time) option in the
+want to retry instead, you can set the [reconnect-on-init](reconnection/README.md#at-init-time) option in the
configuration.
##### Contact points
@@ -91,7 +91,7 @@ This is fine for a quick start on a developer workstation, but you'll quickly wa
specific addresses. There are two ways to do this:
* via [SessionBuilder.addContactPoint()] or [SessionBuilder.addContactPoints()];
-* in the [configuration](configuration/) via the `basic.contact-points` option.
+* in the [configuration](configuration/README.md) via the `basic.contact-points` option.
As soon as there are explicit contact points, you also need to provide the name of the local
datacenter. All contact points must belong to it (as reported in their system tables:
@@ -123,7 +123,7 @@ datastax-java-driver {
```
For more details about the local datacenter, refer to the [load balancing
-policy](load_balancing/#local-only) section.
+policy](load_balancing/README.md#datacenter-locality) section.
##### Keyspace
@@ -135,7 +135,7 @@ session.execute("SELECT * FROM my_keyspace.my_table WHERE id = 1");
```
You can also specify a keyspace at construction time, either through the
-[configuration](configuration/):
+[configuration](configuration/README.md):
```
datastax-java-driver {
@@ -203,7 +203,7 @@ ResultSet rs = session.execute("SELECT release_version FROM system.local");
```
As shown here, the simplest form is to pass a query string directly. You can also pass a
-[Statement](statements/) instance.
+[Statement](statements/README.md) instance.
#### Processing rows
@@ -218,7 +218,7 @@ for (Row row : rs) {
This will return **all results** without limit (even though the driver might use multiple queries in
the background). To handle large result sets, you might want to use a `LIMIT` clause in your CQL
-query, or use one of the techniques described in the [paging](paging/) documentation.
+query, or use one of the techniques described in the [paging](paging/README.md) documentation.
When you know that there is only one row (or are only interested in the first one), the driver
provides a convenience method:
@@ -257,10 +257,10 @@ See [AccessibleByName] for an explanation of the conversion rules.
| blob | getByteBuffer | java.nio.ByteBuffer | |
| boolean | getBoolean | boolean | |
| counter | getLong | long | |
-| date | getLocalDate | java.time.LocalDate | [Temporal types](temporal_types/) |
+| date | getLocalDate | java.time.LocalDate | [Temporal types](temporal_types/README.md) |
| decimal | getBigDecimal | java.math.BigDecimal | |
| double | getDouble | double | |
-| duration | getCqlDuration | [CqlDuration] | [Temporal types](temporal_types/) |
+| duration | getCqlDuration | [CqlDuration] | [Temporal types](temporal_types/README.md) |
| float | getFloat | float | |
| inet | getInetAddress | java.net.InetAddress | |
| int | getInt | int | |
@@ -269,19 +269,19 @@ See [AccessibleByName] for an explanation of the conversion rules.
| set | getSet | java.util.Set | |
| smallint | getShort | short | |
| text | getString | java.lang.String | |
-| time | getLocalTime | java.time.LocalTime | [Temporal types](temporal_types/) |
-| timestamp | getInstant | java.time.Instant | [Temporal types](temporal_types/) |
+| time | getLocalTime | java.time.LocalTime | [Temporal types](temporal_types/README.md) |
+| timestamp | getInstant | java.time.Instant | [Temporal types](temporal_types/README.md) |
| timeuuid | getUuid | java.util.UUID | |
| tinyint | getByte | byte | |
-| tuple | getTupleValue | [TupleValue] | [Tuples](tuples/) |
-| user-defined types | getUDTValue | [UDTValue] | [User-defined types](udts/) |
+| tuple | getTupleValue | [TupleValue] | [Tuples](tuples/README.md) |
+| user-defined types | getUDTValue | [UDTValue] | [User-defined types](udts/README.md) |
| uuid | getUuid | java.util.UUID | |
| varchar | getString | java.lang.String | |
| varint | getBigInteger | java.math.BigInteger | |
-| vector | getVector | [CqlVector] | [Custom Codecs](custom_codecs/) |
+| vector | getVector | [CqlVector] | [Custom Codecs](custom_codecs/README.md) |
Sometimes the driver has to infer a CQL type from a Java type (for example when handling the values
-of [simple statements](statements/simple/)); for those that have multiple CQL equivalents, it makes
+of [simple statements](statements/simple/README.md)); for those that have multiple CQL equivalents, it makes
the following choices:
* `java.lang.String`: `text`
@@ -289,7 +289,7 @@ the following choices:
* `java.util.UUID`: `uuid`
In addition to these default mappings, you can register your own types with
-[custom codecs](custom_codecs/).
+[custom codecs](custom_codecs/README.md).
##### Primitive types
diff --git a/manual/core/address_resolution/README.md b/manual/core/address_resolution/README.md
index 5b2536feb18..ad983f7275a 100644
--- a/manual/core/address_resolution/README.md
+++ b/manual/core/address_resolution/README.md
@@ -109,7 +109,7 @@ public class MyAddressTranslator implements AddressTranslator {
}
```
-Then reference this class from the [configuration](../configuration/):
+Then reference this class from the [configuration](../configuration/README.md):
```
datastax-java-driver.advanced.address-translator.class = com.mycompany.MyAddressTranslator
@@ -125,7 +125,7 @@ nodes are exposed via one hostname pointing to AWS Endpoint), you can configure
`FixedHostNameAddressTranslator` to always translate all node addresses to that same proxy hostname, no matter what IP
address a node has but still using its native transport port.
-To use it, specify the following in the [configuration](../configuration):
+To use it, specify the following in the [configuration](../configuration/README.md):
```
datastax-java-driver.advanced.address-translator.class = FixedHostNameAddressTranslator
@@ -146,7 +146,7 @@ datacenter that node belongs to by checking its IP address against the given dat
For such scenarios you can use `SubnetAddressTranslator` to translate node IPs to the datacenter proxy address
associated with it.
-To use it, specify the following in the [configuration](../configuration):
+To use it, specify the following in the [configuration](../configuration/README.md):
```
datastax-java-driver.advanced.address-translator {
class = SubnetAddressTranslator
@@ -176,7 +176,7 @@ However, this is not always the most cost-effective: if a client and a node are
to connect over the private IP. Ideally, you'd want to pick the best address in each case.
The driver provides `Ec2MultiRegionAddressTranslator` which does exactly that. To use it, specify the following in
-the [configuration](../configuration/):
+the [configuration](../configuration/README.md):
```
datastax-java-driver.advanced.address-translator.class = Ec2MultiRegionAddressTranslator
diff --git a/manual/core/async/README.md b/manual/core/async/README.md
index 5b4bac3dccf..1f69e39727a 100644
--- a/manual/core/async/README.md
+++ b/manual/core/async/README.md
@@ -64,7 +64,7 @@ resultStage.whenComplete(
The driver uses two internal thread pools: one for request I/O and one for administrative tasks
(such as metadata refreshes, schema agreement or processing server events). Note that you can
control the size of these pools with the `advanced.netty` options in the
-[configuration](../configuration).
+[configuration](../configuration/README.md).
When you register a callback on a completion stage, it will execute on a thread in the corresponding
pool:
@@ -82,7 +82,7 @@ resultStage.thenAccept(resultSet -> System.out.println(Thread.currentThread().ge
As long as you use the asynchronous API, the driver will behave in a non-blocking manner: its
internal threads will almost never block. There are a few exceptions to the rule though: see the
-manual page on [non-blocking programming](../non_blocking) for details.
+manual page on [non-blocking programming](../non_blocking/README.md) for details.
Because the asynchronous API is non-blocking, you can safely call a driver method from inside a
callback, even when the callback's execution is triggered by a future returned by the driver:
@@ -221,7 +221,7 @@ be handled anywhere. Either add a `try/catch` block in the callback, or don't ig
Unlike previous versions of the driver, the asynchronous API never triggers synchronous behavior,
even when iterating through the results of a request. `session.executeAsync` returns a dedicated
[AsyncResultSet] that only iterates the current page, the next pages must be fetched explicitly.
-This greatly simplifies asynchronous paging; see the [paging](../paging/#asynchronous-paging)
+This greatly simplifies asynchronous paging; see the [paging](../paging/README.md#asynchronous-paging)
documentation for more details and an example.
[CompletionStage]: https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html
diff --git a/manual/core/authentication/README.md b/manual/core/authentication/README.md
index 516e47f558f..ae3536ba2c6 100644
--- a/manual/core/authentication/README.md
+++ b/manual/core/authentication/README.md
@@ -36,7 +36,7 @@ This can be done in two ways:
### In the configuration
-Define an `auth-provider` section in the [configuration](../configuration/):
+Define an `auth-provider` section in the [configuration](../configuration/README.md):
```
datastax-java-driver {
@@ -255,4 +255,4 @@ session.execute(statement);
[ProxyAuthentication.executeAs]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/dse/driver/api/core/auth/ProxyAuthentication.html#executeAs-java.lang.String-StatementT-
[SessionBuilder.withAuthCredentials]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/session/SessionBuilder.html#withAuthCredentials-java.lang.String-java.lang.String-
[SessionBuilder.withAuthProvider]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/session/SessionBuilder.html#withAuthProvider-com.datastax.oss.driver.api.core.auth.AuthProvider-
-[reference.conf]: ../configuration/reference/
+[reference.conf]: ../configuration/reference/README.md
diff --git a/manual/core/bom/README.md b/manual/core/bom/README.md
index 235edcf632c..c5307082189 100644
--- a/manual/core/bom/README.md
+++ b/manual/core/bom/README.md
@@ -60,8 +60,8 @@ in its POM. The driver artifacts are always in sync, however they were pulled in
### BOM and mapper processor
-If you are using the driver's [object mapper](../../mapper), our recommendation is to declare the
-mapper processor in the [annotationProcessorPaths](../../mapper/config/#maven) section of the
+If you are using the driver's [object mapper](../../mapper/README.md), our recommendation is to declare the
+mapper processor in the [annotationProcessorPaths](../../mapper/config/README.md#maven) section of the
compiler plugin configuration. Unfortunately, `<dependencyManagement>` versions don't work there,
this is a known Maven issue ([MCOMPILER-391]).
diff --git a/manual/core/compression/README.md b/manual/core/compression/README.md
index 9e84fde917d..eb95153a3af 100644
--- a/manual/core/compression/README.md
+++ b/manual/core/compression/README.md
@@ -36,7 +36,7 @@ you have larger payloads, such as:
* requests with many values, or very large values;
* responses with many rows, or many columns per row, or very large columns.
-To enable compression, set the following option in the [configuration](../configuration):
+To enable compression, set the following option in the [configuration](../configuration/README.md):
```
datastax-java-driver {
@@ -55,7 +55,7 @@ better performance and compression ratios over Snappy.
Both implementations rely on third-party libraries, declared by the driver as *optional*
dependencies; if you enable compression, you need to explicitly depend on the corresponding library
to pull it into your project (see the [Integration>Driver
-dependencies](../integration/#driver-dependencies) section for more details).
+dependencies](../integration/README.md#driver-dependencies) section for more details).
### LZ4
@@ -78,7 +78,7 @@ LZ4-java has three internal implementations (from fastest to slowest):
* pure Java using only "safe" classes.
It will pick the best implementation depending on what's possible on your platform. To find out
-which one was chosen, [enable INFO logs](../logging/) on the category
+which one was chosen, [enable INFO logs](../logging/README.md) on the category
`com.datastax.oss.driver.internal.core.protocol.Lz4Compressor` and look for the following message:
```
@@ -97,7 +97,7 @@ Dependency:
```
-**Important: Snappy is not supported when building a [GraalVM native image](../graalvm).**
+**Important: Snappy is not supported when building a [GraalVM native image](../graalvm/README.md).**
Always double-check the exact Snappy version needed; you can find it in the driver's [parent POM].
diff --git a/manual/core/configuration/README.md b/manual/core/configuration/README.md
index deefadbe3d4..7e5b3da2e49 100644
--- a/manual/core/configuration/README.md
+++ b/manual/core/configuration/README.md
@@ -552,6 +552,6 @@ config.getDefaultProfile().getInt(MyCustomOption.AWESOMENESS_FACTOR);
[Typesafe Config]: https://github.com/typesafehub/config
[config standard behavior]: https://github.com/typesafehub/config#standard-behavior
-[reference.conf]: reference/
+[reference.conf]: ./reference/README.md
[HOCON]: https://github.com/typesafehub/config/blob/master/HOCON.md
-[API conventions]: ../../api_conventions
+[API conventions]: ../../api_conventions/README.md
diff --git a/manual/core/configuration/reference/README.md b/manual/core/configuration/reference/README.md
new file mode 100644
index 00000000000..b86ac9a46ce
--- /dev/null
+++ b/manual/core/configuration/reference/README.md
@@ -0,0 +1,32 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements. See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership. The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License. You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied. See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+## Reference configuration
+
+The following is a copy of the ``reference.conf`` file matching the version of this documentation.
+It is packaged in the ``java-driver-core`` JAR artifact, and used at runtime to provide the default
+values for all configuration options (in the sources, it can be found under
+``core/src/main/resources``).
+
+See the [configuration](../README.md) page for more explanations.
+
+```conf
+{% include 'manual/core/configuration/reference/reference.conf' %}
+```
+
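The `{% include %}` tag in the new README above is a Jinja directive expanded at build time by mkdocs-macros-plugin (one of the pip dependencies installed by the workflows earlier in this diff). A hypothetical `mkdocs.yml` fragment enabling it; the repository's actual plugin list may differ:

```yaml
# Assumed mkdocs.yml fragment, not the repo's real configuration.
plugins:
  - search
  - awesome-pages
  - macros   # expands Jinja tags like {% include '...' %} in Markdown pages
```

With `macros` enabled, the included `reference.conf` (symlinked into `manual/core/configuration/reference/`) is inlined into the rendered page instead of appearing as a literal tag.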
diff --git a/manual/core/configuration/reference/README.rst b/manual/core/configuration/reference/README.rst
deleted file mode 100644
index d4989ecf641..00000000000
--- a/manual/core/configuration/reference/README.rst
+++ /dev/null
@@ -1,34 +0,0 @@
-..
- Licensed to the Apache Software Foundation (ASF) under one
- or more contributor license agreements. See the NOTICE file
- distributed with this work for additional information
- regarding copyright ownership. The ASF licenses this file
- to you under the Apache License, Version 2.0 (the
- "License"); you may not use this file except in compliance
- with the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing,
- software distributed under the License is distributed on an
- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- KIND, either express or implied. See the License for the
- specific language governing permissions and limitations
- under the License.
-
-Reference configuration
------------------------
-
-The following is a copy of the ``reference.conf`` file matching the version of this documentation.
-It is packaged in the ``java-driver-core`` JAR artifact, and used at runtime to provide the default
-values for all configuration options (in the sources, it can be found under
-``core/src/main/resources``).
-
-See the `configuration page <../>`_ for more explanations.
-
-.. raw:: html
-
-
-
-.. include:: core/src/main/resources/reference.conf
- :code: properties
diff --git a/manual/core/configuration/reference/reference.conf b/manual/core/configuration/reference/reference.conf
new file mode 120000
index 00000000000..bbfcb9bf3e4
--- /dev/null
+++ b/manual/core/configuration/reference/reference.conf
@@ -0,0 +1 @@
+../../../../core/src/main/resources/reference.conf
\ No newline at end of file
diff --git a/manual/core/control_connection/README.md b/manual/core/control_connection/README.md
index 38544797aed..a7dc94fbc4d 100644
--- a/manual/core/control_connection/README.md
+++ b/manual/core/control_connection/README.md
@@ -21,25 +21,25 @@ under the License.
The control connection is a dedicated connection used for administrative tasks:
-* querying system tables to learn about the cluster's [topology](../metadata/node/) and
- [schema](../metadata/schema/);
-* checking [schema agreement](../metadata/schema/#schema-agreement);
+* querying system tables to learn about the cluster's [topology](../metadata/node/README.md) and
+ [schema](../metadata/schema/README.md);
+* checking [schema agreement](../metadata/schema/README.md#schema-agreement);
* reacting to server events, which are used to notify the driver of external topology or schema
changes.
When the driver starts, the control connection is established to the first contacted node. If that
-node goes down, a [reconnection](../reconnection/) is started to find another node; it is governed
+node goes down, a [reconnection](../reconnection/README.md) is started to find another node; it is governed
by the same policy as regular connections (`advanced.reconnection-policy` options in the
-[configuration](../configuration/)), and tries the nodes according to a query plan from the
-[load balancing policy](../load_balancing/).
+[configuration](../configuration/README.md)), and tries the nodes according to a query plan from the
+[load balancing policy](../load_balancing/README.md).
-The control connection is managed independently from [regular pooled connections](../pooling/), and
+The control connection is managed independently from [regular pooled connections](../pooling/README.md), and
used exclusively for administrative requests. It shows up in [Node.getOpenConnections], as well as
-the `pool.open-connections` [metric](../metrics); for example, if you've configured a pool size of
+the `pool.open-connections` [metric](../metrics/README.md); for example, if you've configured a pool size of
2, the control node will show 3 connections.
There are a few options to fine tune the control connection behavior in the
-`advanced.control-connection` and `advanced.metadata` sections; see the [metadata](../metadata/)
-pages and the [reference configuration](../configuration/reference/) for all the details.
+`advanced.control-connection` and `advanced.metadata` sections; see the [metadata](../metadata/README.md)
+pages and the [reference configuration](../configuration/reference/README.md) for all the details.
[Node.getOpenConnections]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/metadata/Node.html#getOpenConnections--
diff --git a/manual/core/custom_codecs/README.md b/manual/core/custom_codecs/README.md
index f3b7be1e3d9..b1b1eaed044 100644
--- a/manual/core/custom_codecs/README.md
+++ b/manual/core/custom_codecs/README.md
@@ -39,7 +39,7 @@ Define custom Java to CQL mappings.
-----
-Out of the box, the driver comes with [default CQL to Java mappings](../#cql-to-java-type-mapping).
+Out of the box, the driver comes with [default CQL to Java mappings](../README.md#cql-to-java-type-mapping).
For example, if you read a CQL `text` column, it is mapped to its natural counterpart
`java.lang.String`:
@@ -246,7 +246,7 @@ be of type `int`.
If you really want to use integer codes for storage efficiency, implement an explicit mapping
(for example with a `toCode()` method on your enum type). It is then fairly straightforward to
- implement a codec with [MappingCodec](#creating-custom-java-to-cql-mappings-with-mapping-codec),
+ implement a codec with [MappingCodec](#creating-custom-java-to-cql-mappings-with-mappingcodec),
using `TypeCodecs#INT` as the "inner" codec.
For example, assuming the following enum:
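As a minimal, self-contained sketch of what such an enum could look like (the `Weekday` type and its codes are hypothetical, purely for illustration):

```java
// Hypothetical enum (not part of the driver) with an explicit int mapping.
// A MappingCodec<Integer, Weekday> would delegate to fromCode/toCode in its
// conversion methods, with TypeCodecs.INT as the inner codec.
public enum Weekday {
  MONDAY(1), TUESDAY(2), WEDNESDAY(3);

  private final int code;

  Weekday(int code) {
    this.code = code;
  }

  /** The stable code stored in the database. */
  public int toCode() {
    return code;
  }

  /** Looks up the enum constant for a stored code. */
  public static Weekday fromCode(int code) {
    for (Weekday day : values()) {
      if (day.code == code) {
        return day;
      }
    }
    throw new IllegalArgumentException("Unknown Weekday code: " + code);
  }
}
```

Unlike `ordinal()`, the explicit codes stay stable if constants are reordered or new ones are inserted.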
@@ -624,7 +624,7 @@ Coordinates coordinates = row.get("coordinates", Coordinates.class);
```
Note: if you need even more advanced mapping capabilities, consider adopting
-the driver's [object mapping framework](../../mapper/).
+the driver's [object mapping framework](../../mapper/README.md).
### Subtype polymorphism
diff --git a/manual/core/detachable_types/README.md b/manual/core/detachable_types/README.md
index 7968835dd8a..b9c433bc428 100644
--- a/manual/core/detachable_types/README.md
+++ b/manual/core/detachable_types/README.md
@@ -31,7 +31,7 @@ specific circumstances, they can lose that reference, and you might need to reat
Namely, these components are:
-* all [DataType] instances, in particular [tuples](../tuples/) and [UDTs](../udts/);
+* all [DataType] instances, in particular [tuples](../tuples/README.md) and [UDTs](../udts/README.md);
* [result rows][Row], and their [column definitions][ColumnDefinition].
Detachable types are an advanced topic that should only be a concern for 3rd-party tool developers.
@@ -41,7 +41,7 @@ them. See the [bottom line](#bottom-line) at the end of this page for details.
### Rationale
Detachable components are those that encode or decode their fields themselves. For example, when you
-set a field on a [tuple value](../tuples):
+set a field on a [tuple value](../tuples/README.md):
```java
tupleValue = tupleValue.setString(0, "foo");
@@ -53,8 +53,8 @@ reuse the tuple instance in multiple requests.
Encoding requires session-specific information:
-* the [CodecRegistry] instance (in case it contains [custom codecs](../custom_codecs/));
-* the [protocol version](../native_protocol/) (because the binary format can change across
+* the [CodecRegistry] instance (in case it contains [custom codecs](../custom_codecs/README.md));
+* the [protocol version](../native_protocol/README.md) (because the binary format can change across
versions).
Therefore the tuple value needs a reference to the session to access those two objects.
@@ -83,7 +83,7 @@ There is no way to detach an object explicitly. This can only happen when:
* deserializing a previously serialized instance (we're referring here to [Java serialization]);
* attaching an object to another session;
-* creating a [tuple](../tuples/) or [UDT](../udts/) definition manually:
+* creating a [tuple](../tuples/README.md) or [UDT](../udts/README.md) definition manually:
```java
TupleType tupleType = DataTypes.tupleOf(DataTypes.INT, DataTypes.TEXT, DataTypes.FLOAT);
@@ -148,7 +148,7 @@ create tuple or UDT types manually.
Even then, the defaults used by detached objects might be good enough for you:
-* the default codec registry works if you don't have any [custom codec](../custom_codecs/);
+* the default codec registry works if you don't have any [custom codec](../custom_codecs/README.md);
* the binary encoding format is stable across modern protocol versions. The last changes were for
collection encoding from v2 to v3; Java Driver 4 only supports v3 and above. When in doubt, check
the "Changes" section of the [protocol specifications].
diff --git a/manual/core/dse/README.md b/manual/core/dse/README.md
index 75abeafb3d7..8617c3952a3 100644
--- a/manual/core/dse/README.md
+++ b/manual/core/dse/README.md
@@ -21,10 +21,10 @@ under the License.
Some driver features only work with DataStax Enterprise:
-* [Graph](graph/);
-* [Geospatial types](geotypes/);
-* Proxy and GSSAPI authentication (covered in the [Authentication](../authentication/) page).
+* [Graph](graph/README.md);
+* [Geospatial types](geotypes/README.md);
+* Proxy and GSSAPI authentication (covered in the [Authentication](../authentication/README.md) page).
Note that, if you don't use these features, you might be able to exclude certain dependencies in
order to limit the number of JARs in your classpath. See the
-[Integration](../integration/#driver-dependencies) page.
+[Integration](../integration/README.md#driver-dependencies) page.
diff --git a/manual/core/dse/geotypes/README.md b/manual/core/dse/geotypes/README.md
index eb414de4f8d..dc0fa40eec6 100644
--- a/manual/core/dse/geotypes/README.md
+++ b/manual/core/dse/geotypes/README.md
@@ -25,7 +25,7 @@ The driver comes with client-side representations of the DSE geospatial data typ
Note: geospatial types require the [ESRI] library version 1.2 to be present on the classpath. The
DSE driver has a non-optional dependency on that library, but if your application does not use
geotypes at all, it is possible to exclude it to minimize the number of runtime dependencies (see
-the [Integration>Driver dependencies](../../integration/#driver-dependencies) section for
+the [Integration>Driver dependencies](../../integration/README.md#driver-dependencies) section for
more details). If the library cannot be found at runtime, geospatial types won't be available and a
warning will be logged, but the driver will otherwise operate normally (this is also valid for OSGi
deployments).
diff --git a/manual/core/dse/graph/README.md b/manual/core/dse/graph/README.md
index 6bcacd44c4e..398c01ec4fb 100644
--- a/manual/core/dse/graph/README.md
+++ b/manual/core/dse/graph/README.md
@@ -29,7 +29,7 @@ modeling, refer to the [DSE developer guide].*
Note: graph capabilities require the [Apache TinkerPop™] library to be present on the classpath. The
driver has a non-optional dependency on that library, but if your application does not use graph at
all, it is possible to exclude it to minimize the number of runtime dependencies (see the
-[Integration>Driver dependencies](../../integration/#driver-dependencies) section for more
+[Integration>Driver dependencies](../../integration/README.md#driver-dependencies) section for more
details). If the library cannot be found at runtime, graph queries won't be available and a warning
will be logged, but the driver will otherwise operate normally (this is also valid for OSGi
deployments).
@@ -44,7 +44,7 @@ your application, let the driver pull it transitively.
There are 3 ways to execute graph requests:
1. Passing a Gremlin script directly in a plain Java string. We'll refer to this as the
- [script API](script/):
+ [script API](script/README.md):
```java
CqlSession session = CqlSession.builder().build();
@@ -61,8 +61,8 @@ There are 3 ways to execute graph requests:
}
```
-2. Building a traversal with the [TinkerPop fluent API](fluent/), and [executing it
- explicitly](fluent/explicit/) with the session:
+2. Building a traversal with the [TinkerPop fluent API](fluent/README.md), and [executing it
+ explicitly](fluent/explicit/README.md) with the session:
```java
import static com.datastax.dse.driver.api.core.graph.DseGraph.g;
@@ -77,7 +77,7 @@ There are 3 ways to execute graph requests:
```
3. Building a connected traversal with the fluent API, and [executing it
- implicitly](fluent/implicit/) by invoking a terminal step:
+ implicitly](fluent/implicit/README.md) by invoking a terminal step:
```java
GraphTraversalSource g = DseGraph.g
@@ -86,9 +86,9 @@ There are 3 ways to execute graph requests:
List vertices = g.V().has("name", "marko").toList();
```
-All executions modes rely on the same set of [configuration options](options/).
+All execution modes rely on the same set of [configuration options](options/README.md).
-The script and explicit fluent API return driver-specific [result sets](results/). The implicit
+The script and explicit fluent API return driver-specific [result sets](results/README.md). The implicit
fluent API returns Apache TinkerPop™ types directly.
[Apache TinkerPop™]: http://tinkerpop.apache.org/
diff --git a/manual/core/dse/graph/fluent/README.md b/manual/core/dse/graph/fluent/README.md
index c1645fdb234..4f824c9b586 100644
--- a/manual/core/dse/graph/fluent/README.md
+++ b/manual/core/dse/graph/fluent/README.md
@@ -34,9 +34,9 @@ GraphTraversal traversal = g.V().has("name", "marko");
There are two ways to execute fluent traversals:
-* [explicitly](explicit/) by wrapping a traversal into a statement and passing it to
+* [explicitly](explicit/README.md) by wrapping a traversal into a statement and passing it to
`session.execute`;
-* [implicitly](implicit/) by building the traversal from a connected source, and calling a
+* [implicitly](implicit/README.md) by building the traversal from a connected source, and calling a
terminal step.
### Common topics
@@ -52,7 +52,7 @@ fluent API:
* configuration;
* DSE graph schema queries.
-You'll have to use the [script API](../script) for those use cases.
+You'll have to use the [script API](../script/README.md) for those use cases.
#### Performance considerations
diff --git a/manual/core/dse/graph/fluent/explicit/README.md b/manual/core/dse/graph/fluent/explicit/README.md
index 163180a4a8a..caa6048a884 100644
--- a/manual/core/dse/graph/fluent/explicit/README.md
+++ b/manual/core/dse/graph/fluent/explicit/README.md
@@ -42,7 +42,7 @@ for (GraphNode node : result) {
As shown above, [FluentGraphStatement.newInstance] creates a statement from a traversal directly.
The default implementation returned by the driver is **immutable**; if you call additional methods
-on the statement -- for example to set [options](../../options/) -- each method call will create a
+on the statement -- for example to set [options](../../options/README.md) -- each method call will create a
new copy:
```java
@@ -122,7 +122,7 @@ added in a future version.
-----
-See also the [parent page](../) for topics common to all fluent traversals.
+See also the [parent page](../README.md) for topics common to all fluent traversals.
[FluentGraphStatement]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/dse/driver/api/core/graph/FluentGraphStatement.html
[FluentGraphStatement.newInstance]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/dse/driver/api/core/graph/FluentGraphStatement.html#newInstance-org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversal-
diff --git a/manual/core/dse/graph/fluent/implicit/README.md b/manual/core/dse/graph/fluent/implicit/README.md
index f838c376022..dc720f0f7e7 100644
--- a/manual/core/dse/graph/fluent/implicit/README.md
+++ b/manual/core/dse/graph/fluent/implicit/README.md
@@ -42,7 +42,7 @@ completely *detached*: even though they contain the complete data, modifications
not be reflected on the server side.
Traversal sources with different configurations can easily be created through execution profiles in
-the [configuration](../../../../configuration/):
+the [configuration](../../../../configuration/README.md):
```
datastax-java-driver {
@@ -66,6 +66,6 @@ GraphTraversalSource a = AnonymousTraversalSource.traversal().withRemote(
-----
-See also the [parent page](../) for topics common to all fluent traversals.
+See also the [parent page](../README.md) for topics common to all fluent traversals.
[terminal step]: http://tinkerpop.apache.org/docs/current/reference/#terminal-steps
diff --git a/manual/core/dse/graph/options/README.md b/manual/core/dse/graph/options/README.md
index e4649ff34f3..d47d4ab71ca 100644
--- a/manual/core/dse/graph/options/README.md
+++ b/manual/core/dse/graph/options/README.md
@@ -19,7 +19,7 @@ under the License.
## Graph options
-There are various [configuration](../../../configuration/) options that control the execution of
+There are various [configuration](../../../configuration/README.md) options that control the execution of
graph statements. They can also be overridden programmatically on individual statements.
### Setting options
@@ -157,7 +157,7 @@ its own versioning scheme.
unset by default, and you should almost never have to change it: the driver sets it automatically
based on the information it knows about the server.
-There is one exception: if you use the [script API](../script/) against a legacy DSE version (5.0.3
+There is one exception: if you use the [script API](../script/README.md) against a legacy DSE version (5.0.3
or older), the driver infers the wrong protocol version. This manifests as a `ClassCastException`
when you try to deserialize complex result objects, such as vertices:
diff --git a/manual/core/dse/graph/results/README.md b/manual/core/dse/graph/results/README.md
index 3b4d25fa012..eb8ac71e3ba 100644
--- a/manual/core/dse/graph/results/README.md
+++ b/manual/core/dse/graph/results/README.md
@@ -19,7 +19,7 @@ under the License.
## Handling graph results
-[Script queries](../script/) and [explicit fluent traversals](../fluent/explicit/) return graph
+[Script queries](../script/README.md) and [explicit fluent traversals](../fluent/explicit/README.md) return graph
result sets, which are essentially iterables of [GraphNode].
### Synchronous / asynchronous result
@@ -39,7 +39,7 @@ was executed.
* `session.executeAsync` returns an [AsyncGraphResultSet]. It only holds the current page of
results, accessible via the `currentPage()` method. If the query is paged, the next pages must be
fetched explicitly using the `hasMorePages()` and `fetchNextPage()` methods. See [Asynchronous
- paging](../../../paging/#asynchronous-paging) for more details about how to work with async
+ paging](../../../paging/README.md#asynchronous-paging) for more details about how to work with async
types.
*Note: at the time of writing (DSE 6.0), graph queries are never paged. Results are always returned
diff --git a/manual/core/dse/graph/script/README.md b/manual/core/dse/graph/script/README.md
index cec8e4e94ef..70a43e45844 100644
--- a/manual/core/dse/graph/script/README.md
+++ b/manual/core/dse/graph/script/README.md
@@ -38,7 +38,7 @@ As demonstrated above, the simplest way to create a script statement is to pass
string to [ScriptGraphStatement.newInstance].
The default implementation returned by the driver is **immutable**; if you call additional methods
-on the statement -- for example to set [options](../options/) -- each method call will create a new
+on the statement -- for example to set [options](../options/README.md) -- each method call will create a new
copy:
```java
@@ -112,7 +112,7 @@ Alternatively, `withQueryParams` takes multiple parameters as a map.
Building requests as Java strings can be unwieldy, especially for long scripts. Besides, the script
API is a bit less performant on the server side. Therefore we recommend the
-[Fluent API](../fluent/) instead for graph traversals.
+[Fluent API](../fluent/README.md) instead for graph traversals.
Note however that some types of queries can only be performed through the script API:
diff --git a/manual/core/graalvm/README.md b/manual/core/graalvm/README.md
index d20fb739f19..82092746c0b 100644
--- a/manual/core/graalvm/README.md
+++ b/manual/core/graalvm/README.md
@@ -24,14 +24,14 @@ under the License.
* [GraalVM native images](https://www.graalvm.org/reference-manual/native-image/) can be built with
no additional configuration starting with driver 4.13.0.
* But extra configurations are required in a few cases:
- * When using [reactive programming](../reactive);
- * When using [Jackson](../integration#Jackson);
- * When using LZ4 [compression](../compression/);
- * Depending on the [logging backend](../logging) in use.
+ * When using [reactive programming](../reactive/README.md);
+ * When using [Jackson](../integration/README.md#jackson);
+ * When using LZ4 [compression](../compression/README.md);
+ * Depending on the [logging backend](../logging/README.md) in use.
* DSE-specific features:
- * [Geospatial types](../dse/geotypes) are supported.
- * [DSE Graph](../dse/graph) is not officially supported, although it may work.
-* The [shaded jar](../shaded_jar) is not officially supported, although it may work.
+ * [Geospatial types](../dse/geotypes/README.md) are supported.
+ * [DSE Graph](../dse/graph/README.md) is not officially supported, although it may work.
+* The [shaded jar](../shaded_jar/README.md) is not officially supported, although it may work.
-----
@@ -113,7 +113,7 @@ registered for reflection.
### Configuration resources
-The default driver [configuration](../configuration) mechanism is based on the TypeSafe Config
+The default driver [configuration](../configuration/README.md) mechanism is based on the TypeSafe Config
library. TypeSafe Config looks for a few classpath resources when initializing the configuration:
`reference.conf`, `application.conf`, `application.json`, `application.properties`. _These classpath
resources are all automatically included in the native image: you should not need to do it
@@ -124,7 +124,7 @@ resources are handled in native images.
### Configuring the logging backend
-When configuring [logging](../logging), the choice of a backend must be considered carefully, as
+When configuring [logging](../logging/README.md), the choice of a backend must be considered carefully, as
most logging backends resort to reflection during their configuration phase.
By default, GraalVM native images provide support for the java.util.logging (JUL) backend. See
@@ -135,7 +135,7 @@ native images are supported.
### Using reactive-style programming
-The [reactive execution model](../reactive) is compatible with GraalVM native images, but the
+The [reactive execution model](../reactive/README.md) is compatible with GraalVM native images, but the
following configurations must be added:
1. Create the following reflection.json file, or add the entry to an existing file:
@@ -151,7 +151,7 @@ following configurations must be added:
### Using the Jackson JSON library
-[Jackson](https://github.com/FasterXML/jackson) is used in [a few places](../integration#jackson) in
+[Jackson](https://github.com/FasterXML/jackson) is used in [a few places](../integration/README.md#jackson) in
the driver, but is an optional dependency; if you intend to use Jackson, the following
configurations must be added:
@@ -178,7 +178,7 @@ images, see below for more details – replace the above entries with the below
### Enabling compression
-When using [compression](../compression/), only LZ4 can be enabled in native images. **Snappy
+When using [compression](../compression/README.md), only LZ4 can be enabled in native images. **Snappy
compression is not supported.**
In order for LZ4 compression to work in a native image, the following additional GraalVM
@@ -242,7 +242,7 @@ configuration is required:
### Native calls
-The driver performs a few [native calls](../integration#native-libraries) using
+The driver performs a few [native calls](../integration/README.md#native-libraries) using
[JNR](https://github.com/jnr).
Starting with driver 4.7.0, native calls are also possible in a GraalVM native image, without any
@@ -252,7 +252,7 @@ extra configuration.
#### DSE Geospatial types
-DSE [Geospatial types](../dse/geotypes) are supported on GraalVM native images; the following
+DSE [Geospatial types](../dse/geotypes/README.md) are supported on GraalVM native images; the following
configurations must be added:
1. Create the following reflection.json file, or add the entry to an existing file:
@@ -277,7 +277,7 @@ images, as stated above – replace the above entry with the below one:
#### DSE Graph
-**[DSE Graph](../dse/graph) is not officially supported on GraalVM native images.**
+**[DSE Graph](../dse/graph/README.md) is not officially supported on GraalVM native images.**
The following configuration can be used as a starting point for users wishing to build a native
image for a DSE Graph application. DataStax does not guarantee however that the below configuration
@@ -327,7 +327,7 @@ will work in all cases. If the native image build fails, a good option is to use
### Using the shaded jar
-**The [shaded jar](../shaded_jar) is not officially supported in a GraalVM native image.**
+**The [shaded jar](../shaded_jar/README.md) is not officially supported in a GraalVM native image.**
However, it has been reported that the shaded jar can be included in a GraalVM native image as a
drop-in replacement for the regular driver jar for simple applications, without any extra GraalVM
diff --git a/manual/core/idempotence/README.md b/manual/core/idempotence/README.md
index be784dfa40b..c350306f4e0 100644
--- a/manual/core/idempotence/README.md
+++ b/manual/core/idempotence/README.md
@@ -39,13 +39,13 @@ For example:
Idempotence matters because the driver sometimes re-runs requests automatically:
-* [retries](../retries): if we're waiting for a response from a node and the connection gets
+* [retries](../retries/README.md): if we're waiting for a response from a node and the connection gets
dropped, the default retry policy automatically retries on another node. But we can't know what
went wrong with the first node: maybe it went down, or maybe it was just a network issue; in any
case, it might have applied the changes already. Therefore non-idempotent requests are never
retried.
-* [speculative executions](../speculative_execution): if they are enabled and a node takes too long
+* [speculative executions](../speculative_execution/README.md): if they are enabled and a node takes too long
to respond, the driver queries another node to get the response faster. But maybe both nodes will
eventually apply the changes. Therefore non-idempotent requests are never speculatively executed.
@@ -63,7 +63,7 @@ SimpleStatement statement =
.build();
```
-If you don't, they default to the value defined in the [configuration](../configuration/) by the
+If you don't, they default to the value defined in the [configuration](../configuration/README.md) by the
`basic.request.default-idempotence` option; out of the box, it is set to `false`.
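For instance, to make all statements idempotent by default, flip that option in `application.conf` (a sketch; as noted above, the out-of-the-box default is `false`):

```
datastax-java-driver {
  # Applies to every statement that does not set idempotence explicitly
  basic.request.default-idempotence = true
}
```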
When you prepare a statement, its idempotence carries over to bound statements:
@@ -77,7 +77,7 @@ assert bs.isIdempotent();
```
The query builder tries to infer idempotence automatically; refer to
-[its manual](../../query_builder/idempotence/) for more details.
+[its manual](../../query_builder/idempotence/README.md) for more details.
[Statement.setIdempotent]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/cql/Statement.html#setIdempotent-java.lang.Boolean-
[StatementBuilder.setIdempotence]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/cql/StatementBuilder.html#setIdempotence-java.lang.Boolean-
diff --git a/manual/core/integration/README.md b/manual/core/integration/README.md
index f2a96160bce..aac1dee4f16 100644
--- a/manual/core/integration/README.md
+++ b/manual/core/integration/README.md
@@ -25,7 +25,7 @@ under the License.
* explanations about [driver dependencies](#driver-dependencies) and when they can be manually
excluded.
-Note: guidelines to build a GraalVM native image can be found [here](../graalvm).
+Note: guidelines to build a GraalVM native image can be found [here](../graalvm/README.md).
-----
@@ -177,7 +177,7 @@ dependencies, and tell Maven that we're going to use Java 8:
##### Application configuration
`application.conf` is not strictly necessary, but it illustrates an important point about the
-driver's [configuration](../configuration/): you override any of the driver's default options here.
+driver's [configuration](../configuration/README.md): you override any of the driver's default options here.
```
datastax-java-driver {
@@ -189,7 +189,7 @@ In this case, we just specify a custom name for our session, it will appear in t
##### Logging configuration
-For this example, we choose Logback as our [logging framework](../logging/) (we added the dependency
+For this example, we choose Logback as our [logging framework](../logging/README.md) (we added the dependency
in `pom.xml`). `logback.xml` configures it to send the driver's `INFO` logs to the console.
```xml
@@ -211,7 +211,7 @@ dependency, or this file; but the default behavior is a bit verbose.
##### Main class
-`Main.java` is the canonical example introduced in our [quick start](../#quick-start); it connects
+`Main.java` is the canonical example introduced in our [quick start](../README.md#quick-start); it connects
to Cassandra, queries the server version and prints it:
```java
@@ -357,17 +357,17 @@ Here's a rundown of what you can customize:
[Netty](https://netty.io/) is the NIO framework that powers the driver's networking layer.
-It is a required dependency, but we provide a a [shaded JAR](../shaded_jar/) that relocates it to a
+It is a required dependency, but we provide a [shaded JAR](../shaded_jar/README.md) that relocates it to a
different Java package; this is useful to avoid dependency hell if you already use Netty in another
part of your application.
#### Typesafe config
[Typesafe config](https://lightbend.github.io/config/) is used for our file-based
-[configuration](../configuration/).
+[configuration](../configuration/README.md).
It is a required dependency if you use the driver's built-in configuration loader, but this can be
-[completely overridden](../configuration/#bypassing-typesafe-config) with your own implementation,
+[completely overridden](../configuration/README.md#bypassing-typesafe-config) with your own implementation,
that could use a different framework or an ad-hoc solution.
In that case, you can exclude the dependency:
@@ -390,7 +390,7 @@ In that case, you can exclude the dependency:
The driver performs native calls with [JNR](https://github.com/jnr). This is used in two cases:
-* to access a microsecond-precision clock in [timestamp generators](../query_timestamps/);
+* to access a microsecond-precision clock in [timestamp generators](../query_timestamps/README.md);
* to get the process ID when generating [UUIDs][Uuids].
In both cases, this is completely optional; if system calls are not available on the current
@@ -420,11 +420,11 @@ The driver supports compression with either [LZ4](https://github.com/jpountz/lz4
[Snappy](http://google.github.io/snappy/).
These dependencies are optional; you have to add them explicitly in your application in order to
-enable compression. See the [Compression](../compression/) page for more details.
+enable compression. See the [Compression](../compression/README.md) page for more details.
#### Metrics
-The driver exposes [metrics](../metrics/) through the
+The driver exposes [metrics](../metrics/README.md) through the
[Dropwizard](http://metrics.dropwizard.io/4.1.2/) library.
The dependency is declared as required, but metrics are optional. If you've disabled all metrics, or
@@ -449,7 +449,7 @@ In addition, when using Dropwizard, "timer" metrics use
[HdrHistogram](http://hdrhistogram.github.io/HdrHistogram/) to record latency percentiles. At the
time of writing, these metrics are: `cql-requests`, `throttling.delay` and `cql-messages`; you can
also identify them by reading the comments in the [configuration
-reference](../configuration/reference/) (look for "exposed as a Timer").
+reference](../configuration/reference/README.md) (look for "exposed as a Timer").
If all of these metrics are disabled, or if you use a different metrics library, you can remove the
dependency:
@@ -472,9 +472,9 @@ dependency:
[Jackson](https://github.com/FasterXML/jackson) is used:
-* when connecting to [DataStax Astra](../../cloud/);
+* when connecting to [DataStax Astra](../../cloud/README.md);
* when Insights monitoring is enabled;
-* when [Json codecs](../custom_codecs) are being used.
+* when [Json codecs](../custom_codecs/README.md) are being used.
Jackson is declared as a required dependency, but the driver can operate normally without it. If you
don't use any of the above features, you can safely exclude the dependency:
@@ -495,7 +495,7 @@ don't use any of the above features, you can safely exclude the dependency:
#### Esri
-Our [geospatial types](../dse/geotypes/) implementation is based on the [Esri Geometry
+Our [geospatial types](../dse/geotypes/README.md) implementation is based on the [Esri Geometry
API](https://github.com/Esri/geometry-api-java).
For driver versions >= 4.4.0 and < 4.14.0 Esri is declared as a required dependency,
@@ -534,7 +534,7 @@ guaranteed to be fully compatible with DSE.
#### TinkerPop
-[Apache TinkerPop™](http://tinkerpop.apache.org/) is used in our [graph API](../dse/graph/),
+[Apache TinkerPop™](http://tinkerpop.apache.org/) is used in our [graph API](../dse/graph/README.md),
introduced in the OSS driver in version 4.4.0 (it was previously a feature only available in the
now-retired DSE driver).
@@ -601,7 +601,7 @@ Here are the recommended TinkerPop versions for each driver version:
#### Reactive Streams
[Reactive Streams](https://www.reactive-streams.org/) types are referenced in our [reactive
-API](../reactive/).
+API](../reactive/README.md).
The Reactive Streams API is declared as a required dependency, but the driver can operate normally
without it. If you never call any of the `executeReactive` methods, you can exclude the dependency:
@@ -674,7 +674,7 @@ The remaining core driver dependencies are the only ones that are truly mandator
* `java-driver-guava-shaded`, a shaded version of [Guava](https://github.com/google/guava). It is
relocated to a different package, and only used by internal driver code, so it should be
completely transparent to third-party code;
-* the [SLF4J](https://www.slf4j.org/) API for [logging](../logging/).
+* the [SLF4J](https://www.slf4j.org/) API for [logging](../logging/README.md).
[central_oss]: https://search.maven.org/#search%7Cga%7C1%7Ccom.datastax.oss
[maven_pom]: https://maven.apache.org/guides/introduction/introduction-to-the-pom.html
diff --git a/manual/core/load_balancing/README.md b/manual/core/load_balancing/README.md
index 3f391c14f56..aea950e4e9d 100644
--- a/manual/core/load_balancing/README.md
+++ b/manual/core/load_balancing/README.md
@@ -35,7 +35,7 @@ abbreviated LBP) is a central component that determines:
* which nodes the driver will communicate with;
* for each new query, which coordinator to pick, and which nodes to use as failover.
-It is defined in the [configuration](../configuration/):
+It is defined in the [configuration](../configuration/README.md):
```
datastax-java-driver.basic.load-balancing-policy {
@@ -50,7 +50,7 @@ datastax-java-driver.basic.load-balancing-policy {
For each node, the policy computes a *distance* that determines how connections will be established:
* `LOCAL` and `REMOTE` are "active" distances, meaning that the driver will keep open connections to
- this node. [Connection pools](../pooling/) can be sized independently for each distance.
+ this node. [Connection pools](../pooling/README.md) can be sized independently for each distance.
* `IGNORED` means that the driver will never attempt to connect.
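As a configuration sketch, per-distance pool sizes are controlled by the standard pooling options (example values only):

```
datastax-java-driver {
  advanced.connection.pool {
    # Connections per node at LOCAL distance
    local.size = 2
    # Connections per node at REMOTE distance
    remote.size = 1
  }
}
```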
Typically, the distance will reflect network topology (e.g. local vs. remote datacenter), although
@@ -63,7 +63,7 @@ datacenter traffic (see below to understand how to change this behavior).
Each time the driver executes a query, it asks the policy to compute a *query plan*, in other words
a list of nodes. The driver then tries each node in sequence, moving down the plan according to the
-[retry policy](../retries/) and [speculative execution policy](../speculative_execution/).
+[retry policy](../retries/README.md) and [speculative execution policy](../speculative_execution/README.md).
The contents and order of query plans are entirely implementation-specific, but policies typically
return plans that:
@@ -225,7 +225,7 @@ this option to any value greater than zero will have the following effects:
- The load balancing policies will assign the `REMOTE` distance to that many nodes *in each remote
datacenter*.
- The driver will then attempt to open connections to those nodes. The actual number of connections
- to open to each one of those nodes is configurable, see [Connection pools](../pooling/) for
+ to open to each one of those nodes is configurable, see [Connection pools](../pooling/README.md) for
more details. By default, the driver opens only one connection to each node.
- Those remote nodes (and only those) will then become eligible for inclusion in query plans,
effectively enabling cross-datacenter failover.
@@ -280,11 +280,11 @@ replicas that own the data being queried.
##### Providing routing information
-First make sure that [token metadata](../metadata/token/#configuration) is enabled.
+First make sure that [token metadata](../metadata/token/README.md#configuration) is enabled.
Then your statements need to provide:
-* a keyspace: if you use a [per-query keyspace](../statements/per_query_keyspace/), then it will be
+* a keyspace: if you use a [per-query keyspace](../statements/per_query_keyspace/README.md), then it will be
used for routing as well. Otherwise, the driver relies on [getRoutingKeyspace()];
* a routing key: it can be provided either by [getRoutingKey()] \(raw binary data) or
[getRoutingToken()] \(already hashed as a token).
@@ -297,7 +297,7 @@ CREATE TABLE testKs.sensor_data(id int, year int, ts timestamp, data double,
PRIMARY KEY ((id, year), ts));
```
-For [simple statements](../statements/simple/), routing information is never computed
+For [simple statements](../statements/simple/README.md), routing information is never computed
automatically:
```java
@@ -320,7 +320,7 @@ statement = statement.setRoutingKey(
session.execute(statement);
```
-For [bound statements](../statements/prepared/), the keyspace is always available; the routing key
+For [bound statements](../statements/prepared/README.md), the keyspace is always available; the routing key
is only available if all components of the partition key are bound as variables:
```java
@@ -341,7 +341,7 @@ assert statement2.getRoutingKeyspace() != null;
assert statement2.getRoutingKey() == null;
```
-For [batch statements](../statements/batch/), the routing information of each child statement is
+For [batch statements](../statements/batch/README.md), the routing information of each child statement is
inspected; the first non-null keyspace is used as the keyspace of the batch, and the first non-null
routing key as its routing key (the idea is that all children should have the same routing
information, since batches are supposed to operate on a single partition). If no child has any
@@ -410,7 +410,7 @@ that you wish to modify – but keep in mind that it may be simpler to just star
### Using multiple policies
-The load balancing policy can be overridden in [execution profiles](../configuration/#profiles):
+The load balancing policy can be overridden in [execution profiles](../configuration/README.md#execution-profiles):
```
datastax-java-driver {
diff --git a/manual/core/logging/README.md b/manual/core/logging/README.md
index e3f8bfa7777..d3428b3ee6b 100644
--- a/manual/core/logging/README.md
+++ b/manual/core/logging/README.md
@@ -25,7 +25,7 @@ under the License.
* config file examples for Logback and Log4J.
**If you're looking for information about the request logger, see the [request
-tracker](../request_tracker/#request-logger) page.**
+tracker](../request_tracker/README.md#request-logger) page.**
-----
diff --git a/manual/core/metadata/README.md b/manual/core/metadata/README.md
index 73609ee0542..2a456f31108 100644
--- a/manual/core/metadata/README.md
+++ b/manual/core/metadata/README.md
@@ -33,9 +33,9 @@ under the License.
The driver exposes metadata about the Cassandra cluster via the [Session#getMetadata] method. It
returns a [Metadata] object, which contains three types of information:
-* [node metadata](node/)
-* [schema metadata](schema/)
-* [token metadata](token/)
+* [node metadata](node/README.md)
+* [schema metadata](schema/README.md)
+* [token metadata](token/README.md)
Metadata is mostly **immutable** (except for the fields of the [Node] class, see the "node metadata"
link above for details). Each call to `getMetadata()` will return a **new copy** if something has
@@ -73,7 +73,7 @@ This is a big improvement over previous versions of the driver, where it was pos
new keyspace in the schema metadata before the token metadata was updated.
Schema and node state events are debounced. This allows you to control how often the metadata gets
-refreshed. See the [Performance](../performance/#debouncing) page for more details.
+refreshed. See the [Performance](../performance/README.md#debouncing) page for more details.
[Session#getMetadata]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/session/Session.html#getMetadata--
[Metadata]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/metadata/Metadata.html
diff --git a/manual/core/metadata/node/README.md b/manual/core/metadata/node/README.md
index fea04e5f262..7583a370dc5 100644
--- a/manual/core/metadata/node/README.md
+++ b/manual/core/metadata/node/README.md
@@ -134,7 +134,7 @@ context.getEventBus().fire(TopologyEvent.forceUp(node1.getConnectAddress()));
As shown by the imports above, forcing a node down requires the *internal* driver API, which is
reserved for expert usage and subject to the disclaimers in
-[API conventions](../../../api_conventions/).
+[API conventions](../../../api_conventions/README.md).
#### Using a custom topology monitor
diff --git a/manual/core/metadata/schema/README.md b/manual/core/metadata/schema/README.md
index 20521d1def4..9a5d9bb6125 100644
--- a/manual/core/metadata/schema/README.md
+++ b/manual/core/metadata/schema/README.md
@@ -48,7 +48,7 @@ for (TableMetadata table : system.getTables().values()) {
Schema metadata is fully immutable (both the map and all the objects it contains). It represents a
snapshot of the database at the time of the last metadata refresh, and is consistent with the
-[token map](../token/) of its parent `Metadata` object. Keep in mind that `Metadata` is itself
+[token map](../token/README.md) of its parent `Metadata` object. Keep in mind that `Metadata` is itself
immutable; if you need to get the latest schema, be sure to call
`session.getMetadata().getKeyspaces()` again (and not just `getKeyspaces()` on a stale `Metadata`
reference).
@@ -207,7 +207,7 @@ a few filters:
If an element is malformed, or if its regex has a syntax error, a warning is logged and that single
element is ignored.
-The default configuration (see [reference.conf](../../configuration/reference/)) excludes all
+The default configuration (see [reference.conf](../../configuration/reference/README.md)) excludes all
Cassandra and DSE system keyspaces.
Try to use only exact name inclusions if possible. This allows the driver to filter on the server
@@ -331,14 +331,14 @@ changes at the same time.
### Relation to token metadata
-Some of the data in the [token map](../token/) relies on keyspace metadata (any method that takes a
+Some of the data in the [token map](../token/README.md) relies on keyspace metadata (any method that takes a
`CqlIdentifier` argument). If schema metadata is disabled or filtered, token metadata will also be
unavailable for the excluded keyspaces.
### Performing schema updates from the client
If you issue schema-altering requests from the driver (e.g. `session.execute("CREATE TABLE ..")`),
-take a look at the [Performance](../../performance/#schema-updates) page for a few tips.
+take a look at the [Performance](../../performance/README.md#schema-updates) page for a few tips.
[Metadata#getKeyspaces]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/metadata/Metadata.html#getKeyspaces--
[SchemaChangeListener]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/metadata/schema/SchemaChangeListener.html
diff --git a/manual/core/metadata/token/README.md b/manual/core/metadata/token/README.md
index 4d7cd9252df..165a7fd5c6b 100644
--- a/manual/core/metadata/token/README.md
+++ b/manual/core/metadata/token/README.md
@@ -184,7 +184,7 @@ keep the value of the last refresh, and token-aware routing might operate on sta
#### Relation to schema metadata
The keyspace-specific information in `TokenMap` (all methods with a `CqlIdentifier` argument) relies
-on [schema metadata](../schema/). If schema metadata is disabled or filtered, token metadata will
+on [schema metadata](../schema/README.md). If schema metadata is disabled or filtered, token metadata will
also be unavailable for the excluded keyspaces.
diff --git a/manual/core/metrics/README.md b/manual/core/metrics/README.md
index ef5d9b453f0..7ebb793fd33 100644
--- a/manual/core/metrics/README.md
+++ b/manual/core/metrics/README.md
@@ -365,4 +365,4 @@ CSV files, SLF4J logs and Graphite. Refer to their [manual][Dropwizard manual] f
[Micrometer Metrics]: https://micrometer.io/docs
[Micrometer JMX]: https://micrometer.io/docs/registry/jmx
[MicroProfile Metrics]: https://github.com/eclipse/microprofile-metrics
-[reference configuration]: ../configuration/reference/
+[reference configuration]: ../configuration/reference/README.md
diff --git a/manual/core/native_protocol/README.md b/manual/core/native_protocol/README.md
index 42146e63f42..b1c7422a669 100644
--- a/manual/core/native_protocol/README.md
+++ b/manual/core/native_protocol/README.md
@@ -73,7 +73,7 @@ ProtocolVersion currentVersion = session.getContext().getProtocolVersion();
```
The protocol version cannot be changed at runtime. However, you can force a particular version in
-the [configuration](../configuration/):
+the [configuration](../configuration/README.md):
```
datastax-java-driver {
@@ -117,7 +117,7 @@ force the protocol version manually anymore.
### Debugging protocol negotiation
-You can observe the negotiation process in the [logs](../logging/).
+You can observe the negotiation process in the [logs](../logging/README.md).
The versions tried while negotiating with the first node are logged at level `DEBUG` in the category
`com.datastax.oss.driver.internal.core.channel.ChannelFactory`:
@@ -142,13 +142,13 @@ If you want to see the details of mixed cluster negotiation, enable `DEBUG` leve
#### v3 to v4
* [query warnings][ExecutionInfo.getWarnings]
-* [unset values in bound statements](../statements/prepared/#unset-values)
+* [unset values in bound statements](../statements/prepared/README.md#unset-values)
* [custom payloads][Request.getCustomPayload]
#### v4 to v5
-* [per-query keyspace](../statements/per_query_keyspace)
-* [improved prepared statement resilience](../statements/prepared/#prepared-statements-and-schema-changes)
+* [per-query keyspace](../statements/per_query_keyspace/README.md)
+* [improved prepared statement resilience](../statements/prepared/README.md#prepared-statements-and-schema-changes)
in the face of schema changes
[protocol spec]: https://github.com/datastax/native-protocol/tree/1.x/src/main/resources
diff --git a/manual/core/non_blocking/README.md b/manual/core/non_blocking/README.md
index f320ffd13d2..1d56c7a266d 100644
--- a/manual/core/non_blocking/README.md
+++ b/manual/core/non_blocking/README.md
@@ -37,7 +37,7 @@ These guarantees and their exceptions are detailed below. A final chapter explai
driver with BlockHound.
The developer guide also has more information on driver internals and its
-[concurrency model](../../developer/common/concurrency).
+[concurrency model](../../developer/common/concurrency/README.md).
### Definition of "non-blocking"
@@ -61,8 +61,8 @@ The driver offers many execution models. For the built-in ones, the lock-free gu
follows:
* The synchronous API is blocking and does not offer any lock-free guarantee.
-* The [asynchronous](../async) API is implemented in lock-free algorithms.
-* The [reactive](../reactive) API is implemented in lock-free algorithms (it's actually wait-free).
+* The [asynchronous](../async/README.md) API is implemented in lock-free algorithms.
+* The [reactive](../reactive/README.md) API is implemented in lock-free algorithms (it's actually wait-free).
For example, calling any synchronous method declared in [`SyncCqlSession`], such as [`execute`],
will block until the result is available. These methods should never be used in non-blocking
@@ -119,7 +119,7 @@ thread, and partially asynchronously on an internal driver thread.
the driver admin thread performing the initialization tasks must be allowed to block, at least
temporarily.
-[driver context]: ../../developer/common/context
+[driver context]: ../../developer/common/context/README.md
For the reasons above, the initialization phase obviously doesn't qualify as lock-free. For
non-blocking applications, it is generally advised to trigger session initialization during
@@ -155,10 +155,10 @@ should not be used if strict lock-freedom is enforced.
The `RateLimitingRequestThrottler` is currently blocking. The `ConcurrencyLimitingRequestThrottler`
is lock-free.
-See the section about [throttling](../throttling) for details about these components. Depending on
+See the section about [throttling](../throttling/README.md) for details about these components. Depending on
how many requests are being executed in parallel, the thread contention on these locks can be high:
in short, if your application enforces strict lock-freedom, then you should not use the
`RateLimitingRequestThrottler`.
[request throttlers]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/session/throttling/RequestThrottler.html
@@ -182,12 +182,12 @@ this reason, it is advised that this method be called once during application st
safe to use it afterwards in a non-blocking context.
Alternatively, it's possible to disable the usage of client-side timestamp generation, and/or the
-usage of native libraries. See the manual sections on [query timestamps](../query_timestamps) and
-[integration](../integration) for more information.
+usage of native libraries. See the manual sections on [query timestamps](../query_timestamps/README.md) and
+[integration](../integration/README.md) for more information.
One component, the codec registry, can block when its [`register`] method is called; it is
therefore advised that codecs should be registered during application startup exclusively. See the
-[custom codecs](../custom_codecs) section for more details about registering codecs.
+[custom codecs](../custom_codecs/README.md) section for more details about registering codecs.
[`register`]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/type/codec/registry/MutableCodecRegistry.html#register-com.datastax.oss.driver.api.core.type.codec.TypeCodec-
@@ -243,7 +243,7 @@ Beware that a hot-reloading of the default configuration mechanism is performed
admin thread. If hot-reloading is enabled, then this might be reported by lock-freedom infringement
detectors. If that is the case, it is advised to disable hot-reloading by setting the
`datastax-java-driver.basic.config-reload-interval` option to 0. See the manual page on
-[configuration](../configuration) for more information.
+[configuration](../configuration/README.md) for more information.
[`DriverConfigLoader`]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/config/DriverConfigLoader.html
[hot-reloading]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/config/DriverConfigLoader.html#supportsReloading--
@@ -264,8 +264,8 @@ The driver has its own mechanism for detecting blocking calls happening on an in
thread. This mechanism is capable of detecting and reporting blatant cases of misuse of the
asynchronous and reactive APIs, e.g. when the synchronous API is invoked inside a future or callback
produced by the asynchronous execution of a statement. See the core manual page on the
-[asynchronous](../async) API or the developer manual page on
-[driver concurrency](../../developer/common/concurrency) for details.
+[asynchronous](../async/README.md) API or the developer manual page on
+[driver concurrency](../../developer/common/concurrency/README.md) for details.
The driver is not capable, however, of detecting low-level lock-freedom infringements, such as the
usage of locks. You must use an external tool to achieve that. See below how to use BlockHound for
thread. This mechanism is capable of detecting and reporting blatant cases of misuse of the
asynchronous and reactive APIs, e.g. when the synchronous API is invoked inside a future or callback
produced by the asynchronous execution of a statement. See the core manual page on the
-[asynchronous](../async) API or the developer manual page on
-[driver concurrency](../../developer/common/concurrency) for details.
+[asynchronous](../async/README.md) API or the developer manual page on
+[driver concurrency](../../developer/common/concurrency/README.md) for details.
The driver is not capable, however, of detecting low-level lock-freedom infringements, such as the
usage of locks. You must use an external tool to achieve that. See below how to use BlockHound for
diff --git a/manual/core/paging/README.md b/manual/core/paging/README.md
index 2df92bd69d1..c5f445ef311 100644
--- a/manual/core/paging/README.md
+++ b/manual/core/paging/README.md
@@ -50,7 +50,7 @@ datastax-java-driver.basic.request.page-size = 5000
It can be changed at runtime (the new value will be used for requests issued after the change). If
you have categories of queries that require different page sizes, use
-[configuration profiles](../configuration#profiles).
+[configuration profiles](../configuration/README.md#execution-profiles).
Note that the page size is merely a hint; the server will not always return the exact number of
rows, it might decide to return slightly more or less.
@@ -153,7 +153,7 @@ private CompletionStage countRows(AsyncResultSet resultSet, int previou
}
```
-See [Asynchronous programming](../async/) for more tips about the async API.
+See [Asynchronous programming](../async/README.md) for more tips about the async API.
### Saving and reusing the paging state
@@ -199,7 +199,7 @@ to reinject it in the wrong statement. This allows you to detect the error early
roundtrip to the server.
Note that, if you use a simple statement and one of the bound values requires a [custom
-codec](../custom_codecs), you have to provide a reference to the session when reinjecting the paging
+codec](../custom_codecs/README.md), you have to provide a reference to the session when reinjecting the paging
state:
```java
@@ -249,7 +249,7 @@ rs = session.execute(query);
OffsetPager.Page page5 = pager.getPage(rs, 5);
```
-Note that `getPage` can also process the entity iterables returned by the [mapper](../../mapper/).
+Note that `getPage` can also process the entity iterables returned by the [mapper](../../mapper/README.md).
#### Establishing application-level guardrails
diff --git a/manual/core/performance/README.md b/manual/core/performance/README.md
index 3afb321968e..97a892e6b89 100644
--- a/manual/core/performance/README.md
+++ b/manual/core/performance/README.md
@@ -26,7 +26,7 @@ easy reference if you're benchmarking your application or diagnosing performance
### Statements
-[Statements](../statements/) are some of the driver types you'll use the most. Every request needs
+[Statements](../statements/README.md) are some of the driver types you'll use the most. Every request needs
one -- even `session.execute(String)` creates a `SimpleStatement` under the hood.
#### Immutability and builders
@@ -47,7 +47,7 @@ initialized statically and stored as constants.
#### Prepared statements
-[Prepared statements](../statements/prepared) allow Cassandra to cache parsed query strings
+[Prepared statements](../statements/prepared/README.md) allow Cassandra to cache parsed query strings
server-side, but that's not their only benefit for performance:
* the driver also caches the response metadata, which can then be skipped in subsequent responses.
@@ -91,34 +91,34 @@ By default, the driver opens 1 connection per node, and allows 1024 concurrent r
connection. In our experience this is enough for most scenarios.
If your application generates a very high throughput (hundreds of thousands of requests per second),
-you might want to experiment with different settings. See the [tuning](../pooling/#tuning) section
+you might want to experiment with different settings. See the [tuning](../pooling/README.md#tuning) section
in the connection pooling page.
#### Compression
-Consider [compression](../compression/) if your queries return large payloads; it might help to
+Consider [compression](../compression/README.md) if your queries return large payloads; it might help to
reduce network traffic.
#### Timestamp generation
-Each query is assigned a [timestamp](../query_timestamps/) to order them relative to each other.
+Each query is assigned a [timestamp](../query_timestamps/README.md) to order them relative to each other.
By default, this is done driver-side with
-[AtomicTimestampGenerator](../query_timestamps/#atomic-timestamp-generator). This is a very simple
+[AtomicTimestampGenerator](../query_timestamps/README.md#atomictimestampgenerator). This is a very simple
operation so unlikely to be a bottleneck, but note that there are other options, such as a
-[thread-local](../query_timestamps/#thread-local-timestamp-generator) variant that creates slightly
+[thread-local](../query_timestamps/README.md#threadlocaltimestampgenerator) variant that creates slightly
less contention, writing your own implementation or letting the server assign timestamps.
#### Tracing
-[Tracing](../tracing/) should be used for only a small percentage of your queries. It consumes
+[Tracing](../tracing/README.md) should be used for only a small percentage of your queries. It consumes
additional resources on the server, and fetching each trace requires background requests.
Do not enable tracing for every request; it's a sure way to bring your performance down.
#### Request trackers
-[Request trackers](../request_tracker/) are on the hot path (that is, invoked on I/O threads, each
+[Request trackers](../request_tracker/README.md) are on the hot path (that is, invoked on I/O threads, each
time a request is executed), and users can plug custom implementations.
If you experience throughput issues, check if any trackers are configured, and what they are doing.
@@ -126,7 +126,7 @@ They should avoid blocking calls, as well as any CPU-intensive computations.
#### Metrics
-Similarly, some of the driver's [metrics](../metrics/) are updated for every request (if the metric
+Similarly, some of the driver's [metrics](../metrics/README.md) are updated for every request (if the metric
is enabled).
By default, the driver ships with all metrics disabled. Enable them conservatively, and if you're
@@ -135,7 +135,7 @@ cause.
#### Throttling
-[Throttling](../throttling/) can help establish more predictable server performance, by controlling
+[Throttling](../throttling/README.md) can help establish more predictable server performance, by controlling
how much load each driver instance is allowed to put on the cluster. The throttling algorithm itself
incurs a bit of overhead in the driver, but that shouldn't be a problem since the goal is to stay
under reasonable rates in the first place.
@@ -151,7 +151,7 @@ private fields or constants to alleviate GC pressure.
#### Identifiers
-The driver uses [CqlIdentifier] to deal with [case sensitivity](../../case_sensitivity). When you
+The driver uses [CqlIdentifier] to deal with [case sensitivity](../../case_sensitivity/README.md). When you
call methods that take raw strings, the driver generally wraps them under the hood:
```java
@@ -182,7 +182,7 @@ pst.bind().setInt("age", 25);
#### Type tokens
[GenericType] is used to express complex generic types -- such as
-[nested collections](../#collection-types) -- in getters and setters. These objects are immutable
+[nested collections](../README.md#collection-types) -- in getters and setters. These objects are immutable
and stateless, so they are good candidates for constants:
```java
@@ -196,7 +196,7 @@ to store yours.
#### Built queries
-Similarly, [built queries](../../query_builder/) are immutable and don't need a reference to a live
+Similarly, [built queries](../../query_builder/README.md) are immutable and don't need a reference to a live
driver instance. If you create them statically, they can be stored as constants:
```java
@@ -209,7 +209,7 @@ already happens at initialization time.
#### Derived configuration profiles
-The configuration API allows you to build [derived profiles](../configuration/#derived-profiles) at
+The configuration API allows you to build [derived profiles](../configuration/README.md#derived-profiles) at
runtime.
```java
@@ -224,7 +224,7 @@ of recreating them each time.
### Metadata
-The driver maintains [metadata](../metadata/) about the state of the Cassandra cluster. This work is
+The driver maintains [metadata](../metadata/README.md) about the state of the Cassandra cluster. This work is
done on dedicated "admin" threads (see the [thread pooling](#thread-pooling) section below), so it's
not in direct competition with regular requests.
@@ -245,12 +245,12 @@ This will save CPU and memory resources, but you lose some driver features:
* if schema is disabled, `session.getMetadata().getKeyspaces()` will always be empty: your
application won't be able to inspect the database schema dynamically.
* if the token map is disabled, `session.getMetadata().getTokenMap()` will always be empty, and you
- lose the ability to use [token-aware routing](../load_balancing/#token-aware).
+ lose the ability to use [token-aware routing](../load_balancing/README.md#token-aware).
Note that disabling the schema implicitly disables the token map (because computing the token map
requires the keyspace replication settings).
-Perhaps more interestingly, metadata can be [filtered](../metadata/schema/#filtering) to a specific
+Perhaps more interestingly, metadata can be [filtered](../metadata/schema/README.md#filtering) to a specific
subset of keyspaces. This is handy if you connect to a shared cluster that holds data for multiple
applications:
@@ -260,7 +260,7 @@ datastax-java-driver.advanced.metadata {
}
```
-To get a sense of the time spent on metadata refreshes, enable [debug logs](../logging/) and look
+To get a sense of the time spent on metadata refreshes, enable [debug logs](../logging/README.md) and look
for entries like this:
```
@@ -319,7 +319,7 @@ You should group your schema changes as much as possible.
Every change made from a client will be pushed to all other clients, causing them to refresh their
metadata. If you have multiple client instances, it might be a good idea to
-[deactivate the metadata](../metadata/schema/#enabling-disabling) on all clients while you apply the
+[deactivate the metadata](../metadata/schema/README.md#enablingdisabling) on all clients while you apply the
updates, and reactivate it at the end (reactivating will trigger an immediate refresh, so you might
want to ramp up clients to avoid a "thundering herd" effect).
@@ -327,7 +327,7 @@ Schema changes have to replicate to all nodes in the cluster. To minimize the ch
disagreement errors:
* apply your changes serially. The driver handles this automatically by checking for
- [schema agreement](../metadata/schema/#schema-agreement) after each DDL query. Run them from the
+ [schema agreement](../metadata/schema/README.md#schema-agreement) after each DDL query. Run them from the
same application thread, and, if you use the asynchronous API, chain the futures properly.
* send all the changes to the same coordinator. This is one of the rare cases where we recommend
using [Statement.setNode()].
@@ -346,7 +346,7 @@ The driver architecture is designed around two code paths:
* the driver's "timer" thread for request timeouts and speculative executions. See
`datastax-java-driver.advanced.netty.timer`.
* the **cold path** is for all administrative tasks: managing the
- [control connection](../control_connection), parsing [metadata](../metadata/), reacting to cluster
+ [control connection](../control_connection/README.md), parsing [metadata](../metadata/README.md), reacting to cluster
events (node going up/down, getting added/removed, etc), and scheduling periodic events
(reconnections, reloading the configuration). Comparatively, these tasks happen less often, and
are less critical (for example, stale schema metadata is not a blocker for request execution).
@@ -359,7 +359,7 @@ every case is different, but you might want to try lowering I/O threads, especia
application already creates a lot of threads on its side.
Note that you can gain more fine-grained control over thread pools via the
-[internal](../../api_conventions) API (look at the `NettyOptions` interface). In particular, it is
+[internal](../../api_conventions/README.md) API (look at the `NettyOptions` interface). In particular, it is
possible to reuse the same event loop group for I/O, admin tasks, and even your application code
(the driver's internal code is fully asynchronous so it will never block any thread). The timer is
the only one that will have to stay on a separate thread.
diff --git a/manual/core/pooling/README.md b/manual/core/pooling/README.md
index 578de6b4abd..423b4e3433f 100644
--- a/manual/core/pooling/README.md
+++ b/manual/core/pooling/README.md
@@ -52,7 +52,7 @@ You don't need to manage connections yourself. You simply interact with a [CqlSe
takes care of it.
**For a given session, there is one connection pool per connected node** (a node is connected when
-it is up and not ignored by the [load balancing policy](../load_balancing/)).
+it is up and not ignored by the [load balancing policy](../load_balancing/README.md)).
The number of connections per pool is configurable (this will be described in the next section).
There are up to 32768 stream ids per connection.
@@ -65,7 +65,7 @@ There are up to 32768 stream ids per connection.
### Configuration
-Pool sizes are defined in the `connection` section of the [configuration](../configuration/). Here
+Pool sizes are defined in the `connection` section of the [configuration](../configuration/README.md). Here
are the relevant options with their default values:
```
@@ -112,7 +112,7 @@ the change.
### Monitoring
-The driver exposes node-level [metrics](../metrics/) to monitor your pools (note that all metrics
+The driver exposes node-level [metrics](../metrics/README.md) to monitor your pools (note that all metrics
are disabled by default, you'll need to change your configuration to enable them):
```
@@ -166,7 +166,7 @@ improvement: the server is only going to service so many requests at a time anyw
requests are just going to pile up.
Lowering the value is not a good idea either. If your goal is to limit the global throughput of the
-driver, a [throttler](../throttling) is a better solution.
+driver, a [throttler](../throttling/README.md) is a better solution.
#### Number of connections per node
diff --git a/manual/core/query_timestamps/README.md b/manual/core/query_timestamps/README.md
index 4498afe21c4..f2d2e5c2daa 100644
--- a/manual/core/query_timestamps/README.md
+++ b/manual/core/query_timestamps/README.md
@@ -51,7 +51,7 @@ session.execute("INSERT INTO my_table(c1, c2) values (1, 1) " +
The driver has a timestamp generator that gets invoked for every outgoing request; it either assigns
a client-side timestamp to the request, or indicates that the server should assign it.
-The timestamp generator is defined in the [configuration](../configuration/).
+The timestamp generator is defined in the [configuration](../configuration/README.md).
#### AtomicTimestampGenerator
@@ -148,7 +148,7 @@ implementation class from the configuration.
#### Using multiple generators
-The timestamp generator can be overridden in [execution profiles](../configuration/#profiles):
+The timestamp generator can be overridden in [execution profiles](../configuration/README.md#execution-profiles):
```
datastax-java-driver {
diff --git a/manual/core/reactive/README.md b/manual/core/reactive/README.md
index 37a2e3411b8..a84e133132f 100644
--- a/manual/core/reactive/README.md
+++ b/manual/core/reactive/README.md
@@ -33,7 +33,7 @@ Notes:
* For historical reasons, reactive-related driver types reside in a package prefixed with `dse`;
however, reactive queries also work with regular Cassandra.
* The reactive execution model is implemented in a non-blocking fashion: see the manual page on
- [non-blocking programming](../non_blocking) for details.
+ [non-blocking programming](../non_blocking/README.md) for details.
### Overview
@@ -399,8 +399,8 @@ more fine-grained control of what should be retried, and how, is required.
[ReactiveRow.getExecutionInfo]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/dse/driver/api/core/cql/reactive/ReactiveRow.html#getExecutionInfo--
[ReactiveRow.wasApplied]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/dse/driver/api/core/cql/reactive/ReactiveRow.html#wasApplied--
-[built-in retry mechanism]: ../retries/
-[request throttling]: ../throttling/
+[built-in retry mechanism]: ../retries/README.md
+[request throttling]: ../throttling/README.md
[Managing concurrency in asynchronous query execution]: https://docs.datastax.com/en/devapp/doc/devapp/driverManagingConcurrency.html
[Publisher]: https://www.reactive-streams.org/reactive-streams-1.0.2-javadoc/org/reactivestreams/Publisher.html
diff --git a/manual/core/reconnection/README.md b/manual/core/reconnection/README.md
index 3eb6dad9c05..0d875e3eff8 100644
--- a/manual/core/reconnection/README.md
+++ b/manual/core/reconnection/README.md
@@ -36,17 +36,17 @@ When a connection is lost, try to reestablish it at configured intervals.
If a running session loses a connection to a node, it tries to re-establish it according to a
configurable policy. This is used in two places:
-* [connection pools](../pooling/): for each node, a session has a fixed-size pool of connections to
+* [connection pools](../pooling/README.md): for each node, a session has a fixed-size pool of connections to
execute user requests. If one or more connections drop, a reconnection gets started for the pool;
each attempt tries to reopen the missing number of connections. This goes on until the pool is
back to its expected size;
-* [control connection](../control_connection/): a session uses a single connection to an arbitrary
+* [control connection](../control_connection/README.md): a session uses a single connection to an arbitrary
node for administrative requests. If that connection goes down, a reconnection gets started; each
attempt iterates through all active nodes until one of them accepts a connection. This goes on
until we have a control node again.
The reconnection policy controls the interval between each attempt. It is defined in the
-[configuration](../configuration/):
+[configuration](../configuration/README.md):
```
datastax-java-driver {
@@ -84,7 +84,7 @@ is the exponential one with the default values, and the control connection is in
* [t = 2.2], node3's pool tries to open its missing connection, which succeeds. The pool is back to
its expected size, node3's reconnection stops;
* [t = 2.5] the control connection tries to find a new node. It invokes the
- [load balancing policy](../load_balancing/) to get a query plan, which happens to start with
+ [load balancing policy](../load_balancing/README.md) to get a query plan, which happens to start with
node4. The connection succeeds, node4 is now the control node and the reconnection stops;
* [t = 3] node2's pool tries to open the last missing connection, which succeeds. The pool is back
to its expected size, node2's reconnection stops.
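The timeline above assumes the exponential policy with its stock values; spelled out as configuration, that corresponds to something like the following (option names per the 4.x reference configuration — verify the defaults against your driver version):

```
datastax-java-driver.advanced.reconnection-policy {
  class = ExponentialReconnectionPolicy
  # first attempt after base-delay, then doubling up to max-delay
  base-delay = 1 second
  max-delay = 60 seconds
}
```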
diff --git a/manual/core/request_id/README.md b/manual/core/request_id/README.md
index a766a4419af..ea22aaba516 100644
--- a/manual/core/request_id/README.md
+++ b/manual/core/request_id/README.md
@@ -34,7 +34,7 @@ Usage:
### Request Id Generator Configuration
-Request ID generator can be declared in the [configuration](../configuration/) as follows:
+Request ID generator can be declared in the [configuration](../configuration/README.md) as follows:
```
datastax-java-driver.advanced.request-id.generator {
diff --git a/manual/core/request_tracker/README.md b/manual/core/request_tracker/README.md
index c135abfe53f..a70e6b219ee 100644
--- a/manual/core/request_tracker/README.md
+++ b/manual/core/request_tracker/README.md
@@ -35,7 +35,7 @@ every application request. The driver comes with an optional implementation that
### Configuration
-Request trackers can be declared in the [configuration](../configuration/) as follows:
+Request trackers can be declared in the [configuration](../configuration/README.md) as follows:
```
datastax-java-driver.advanced.request-tracker {
diff --git a/manual/core/retries/README.md b/manual/core/retries/README.md
index e92f8e214aa..2afa5585217 100644
--- a/manual/core/retries/README.md
+++ b/manual/core/retries/README.md
@@ -26,7 +26,7 @@ What to do when a request failed on a node: retry (same or other node), rethrow,
* `advanced.retry-policy` in the configuration. Default policy retries at most once, in cases that
have a high chance of success; you can also write your own.
* can have per-profile policies.
-* only kicks in if the query is [idempotent](../idempotence).
+* only kicks in if the query is [idempotent](../idempotence/README.md).
-----
@@ -60,7 +60,7 @@ use this retry policy if you understand the consequences.**
Since `DefaultRetryPolicy` is already the driver's default retry policy, no special configuration
is required to activate it. To use `ConsistencyDowngradingRetryPolicy` instead, the following
-option must be declared in the driver [configuration](../configuration/):
+option must be declared in the driver [configuration](../configuration/README.md):
```
datastax-java-driver.advanced.retry-policy.class = ConsistencyDowngradingRetryPolicy
@@ -78,7 +78,7 @@ The policy has several methods that cover different error cases. Each method ret
what to do next. There are four possible retry decisions:
* retry on the same node;
-* retry on the next node in the [query plan](../load_balancing/) for this statement;
+* retry on the next node in the [query plan](../load_balancing/README.md) for this statement;
* rethrow the exception to the user code (from the `session.execute` call, or as a failed future if
using the asynchronous API);
* ignore the exception. That is, mark the request as successful, and return an empty result set.
@@ -144,7 +144,7 @@ mutation was applied or not on the non-answering replica.
If the policy rethrows the error, the user code will get a [WriteTimeoutException].
-This method is only invoked for [idempotent](../idempotence/) statements. Otherwise, the driver
+This method is only invoked for [idempotent](../idempotence/README.md) statements. Otherwise, the driver
bypasses the retry policy and always rethrows the error.
The default policy triggers a maximum of one retry (to the same node), and only for a `BATCH_LOG`
@@ -173,10 +173,10 @@ cases:
* if the connection was closed due to an external event. This will manifest as a
[ClosedConnectionException] \(network failure) or [HeartbeatException] \(missed
- [heartbeat](../pooling/#heartbeat));
+ [heartbeat](../pooling/README.md#heartbeat));
* if there was an unexpected error while decoding the response (this can only be a driver bug).
-This method is only invoked for [idempotent](../idempotence/) statements. Otherwise, the driver
+This method is only invoked for [idempotent](../idempotence/README.md) statements. Otherwise, the driver
bypasses the retry policy and always rethrows the error.
Both the default policy and `ConsistencyDowngradingRetryPolicy` retry on the next node if the
@@ -188,7 +188,7 @@ The coordinator replied with an error other than `READ_TIMEOUT`, `WRITE_TIMEOUT`
Namely, this covers [OverloadedException], [ServerError], [TruncateException],
[ReadFailureException] and [WriteFailureException].
-This method is only invoked for [idempotent](../idempotence/) statements. Otherwise, the driver
+This method is only invoked for [idempotent](../idempotence/README.md) statements. Otherwise, the driver
bypasses the retry policy and always rethrows the error.
Both the default policy and `ConsistencyDowngradingRetryPolicy` rethrow read and write failures,
@@ -200,14 +200,14 @@ There are a few cases where retrying is always the right thing to do. These are
`RetryPolicy`, but instead hard-coded in the driver:
* **any error before a network write was attempted**: to send a query, the driver selects a node,
- borrows a connection from the host's [connection pool](../pooling/), and then writes the message
+ borrows a connection from the host's [connection pool](../pooling/README.md), and then writes the message
to the connection. Errors can occur before the write was even attempted, for example if the
connection pool is saturated, or if the node went down right after we borrowed. In those cases, it
is always safe to retry since the request wasn't sent, so the driver will transparently move to
the next node in the query plan.
* **re-preparing a statement**: when the driver executes a prepared statement, it may find out that
the coordinator doesn't know about it, and need to re-prepare it on the fly (this is described in
- detail [here](../statements/prepared/)). The query is then retried on the same node.
+ detail [here](../statements/prepared/README.md)). The query is then retried on the same node.
* **trying to communicate with a node that is bootstrapping**: this is a rare edge case, as in
practice the driver should never try to communicate with a bootstrapping node (the only way is if
it was specified as a contact point). It is again safe to assume that the query was not executed
@@ -222,7 +222,7 @@ directly to the user. These include:
### Using multiple policies
-The retry policy can be overridden in [execution profiles](../configuration/#profiles):
+The retry policy can be overridden in [execution profiles](../configuration/README.md#execution-profiles):
```
datastax-java-driver {
diff --git a/manual/core/shaded_jar/README.md b/manual/core/shaded_jar/README.md
index 8e183c0efb5..4272683ff63 100644
--- a/manual/core/shaded_jar/README.md
+++ b/manual/core/shaded_jar/README.md
@@ -20,11 +20,11 @@ under the License.
## Using the shaded JAR
The default `java-driver-core` JAR depends on a number of [third party
-libraries](../integration/#driver-dependencies). This can create conflicts if your application
+libraries](../integration/README.md#driver-dependencies). This can create conflicts if your application
already uses other versions of those same dependencies.
-To avoid this, we provide an alternative core artifact that shades [Netty](../integration/#netty),
-[Jackson](../integration/#jackson) and [ESRI](../integration/#esri). To use it, replace the
+To avoid this, we provide an alternative core artifact that shades [Netty](../integration/README.md#netty),
+[Jackson](../integration/README.md#jackson) and [ESRI](../integration/README.md#esri). To use it, replace the
dependency to `java-driver-core` by:
```xml
diff --git a/manual/core/speculative_execution/README.md b/manual/core/speculative_execution/README.md
index 5666d6a1363..828e07af2ba 100644
--- a/manual/core/speculative_execution/README.md
+++ b/manual/core/speculative_execution/README.md
@@ -91,12 +91,12 @@ details and how to enable them.
### Query idempotence
-If a query is [not idempotent](../idempotence/), the driver will never schedule speculative
+If a query is [not idempotent](../idempotence/README.md), the driver will never schedule speculative
executions for it, because there is no way to guarantee that only one node will apply the mutation.
### Configuration
-Speculative executions are controlled by a policy defined in the [configuration](../configuration/).
+Speculative executions are controlled by a policy defined in the [configuration](../configuration/README.md).
The default implementation never schedules an execution:
```
@@ -138,7 +138,7 @@ referencing your implementation class from the configuration.
### How speculative executions affect retries
-Turning on speculative executions doesn't change the driver's [retry](../retries/) behavior. Each
+Turning on speculative executions doesn't change the driver's [retry](../retries/README.md) behavior. Each
parallel execution will trigger retries independently:
```ditaa
@@ -183,7 +183,7 @@ executions increase the pressure on the cluster.
If you use speculative executions to avoid unhealthy nodes, a well-behaved node should rarely hit
the threshold. We recommend running a benchmark on a healthy platform (all nodes up and healthy) and
-monitoring the request percentiles with the `cql-requests` [metric](../metrics/). Then use the
+monitoring the request percentiles with the `cql-requests` [metric](../metrics/README.md). Then use the
latency at a high percentile (for example p99.9) as the threshold.
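For example, if your benchmark shows a p99.9 around 100 milliseconds, a constant policy using that value as the threshold could be sketched as follows (option names per the 4.x reference configuration; the numbers are illustrative):

```
datastax-java-driver.advanced.speculative-execution-policy {
  class = ConstantSpeculativeExecutionPolicy
  # total executions, including the initial request
  max-executions = 2
  # fire a speculative execution if no response after this delay
  delay = 100 milliseconds
}
```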
Alternatively, maybe low latency is your absolute priority, and you are willing to take the
@@ -191,27 +191,27 @@ increased throughput as a tradeoff. In that case, set the threshold to 0 and pro
accordingly.
You can monitor the number of speculative executions triggered by each node with the
-`speculative-executions` [metric](../metrics/).
+`speculative-executions` [metric](../metrics/README.md).
#### Stream id exhaustion
One side-effect of speculative executions is that many requests get cancelled, which can lead to a
phenomenon called *stream id exhaustion*: each TCP connection can handle multiple simultaneous
-requests, identified by a unique number called *stream id* (see also the [pooling](../pooling/)
+requests, identified by a unique number called *stream id* (see also the [pooling](../pooling/README.md)
section). When a request gets cancelled, we can't reuse its stream id immediately because we might
still receive a response from the server later. If this happens often, the number of available
stream ids diminishes over time, and when it goes below a given threshold we close the connection
and create a new one. If requests are often cancelled, you will see connections being recycled at a
high rate.
-The best way to monitor this is to compare the `pool.orphaned-streams` [metric](../metrics/) to the
+The best way to monitor this is to compare the `pool.orphaned-streams` [metric](../metrics/README.md) to the
total number of available stream ids (which can be computed from the configuration:
`pool.local.size * max-requests-per-connection`). The `pool.available-streams` and `pool.in-flight`
metrics will also give you an idea of how many stream ids are left for active queries.
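As a rough worked example (the numbers are hypothetical; substitute your own configuration values):

```
# pool.local.size             = 2     (connections per local node)
# max-requests-per-connection = 1024  (stream ids per connection)
# => total stream ids per local node = 2 * 1024 = 2048
#
# If pool.orphaned-streams regularly climbs toward that total,
# cancelled executions are exhausting stream ids and connections
# will be recycled at a high rate.
```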
#### Request ordering
-Note: ordering issues are only a problem with [server-side timestamps](../query_timestamps/), which
+Note: ordering issues are only a problem with [server-side timestamps](../query_timestamps/README.md), which
are not the default anymore in driver 4+. So unless you've explicitly enabled
`ServerSideTimestampGenerator`, you can skip this section.
@@ -235,12 +235,12 @@ The workaround is to either specify a timestamp in your CQL queries:
insert into my_table (k, v) values (1, 1) USING TIMESTAMP 1432764000;
-Or use a client-side [timestamp generator](../query_timestamps/).
+Or use a client-side [timestamp generator](../query_timestamps/README.md).
### Using multiple policies
The speculative execution policy can be overridden in [execution
-profiles](../configuration/#profiles):
+profiles](../configuration/README.md#execution-profiles):
```
datastax-java-driver {
diff --git a/manual/core/ssl/README.md b/manual/core/ssl/README.md
index 913c7bc6c9a..9587ce77218 100644
--- a/manual/core/ssl/README.md
+++ b/manual/core/ssl/README.md
@@ -102,7 +102,7 @@ already be the case if you've followed the steps for inter-node encryption).
By default, the driver's SSL support is based on the JDK's built-in implementation: JSSE (Java
Secure Socket Extension).
-To enable it, you need to define an engine factory in the [configuration](../configuration/).
+To enable it, you need to define an engine factory in the [configuration](../configuration/README.md).
#### JSSE, property-based
@@ -225,7 +225,7 @@ CqlSession session = CqlSession.builder()
Netty supports native integration with OpenSSL / boringssl. The driver does not provide this out of
the box, but with a bit of custom development it is fairly easy to add. See
-[SslHandlerFactory](../../developer/netty_pipeline/#ssl-handler-factory) in the developer docs.
+[SslHandlerFactory](../../developer/netty_pipeline/README.md#sslhandlerfactory) in the developer docs.
[dsClientToNode]: https://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/secureSSLClientToNode.html
diff --git a/manual/core/statements/README.md b/manual/core/statements/README.md
index 394e81ae00e..2e7c3506b2d 100644
--- a/manual/core/statements/README.md
+++ b/manual/core/statements/README.md
@@ -33,22 +33,22 @@ To execute a CQL query, you create a [Statement] instance and pass it to
[Session#execute][execute] or [Session#executeAsync][executeAsync]. The driver provides various
implementations:
-* [SimpleStatement](simple/): a simple implementation built directly from a character string.
+* [SimpleStatement](simple/README.md): a simple implementation built directly from a character string.
Typically used for queries that are executed only once or a few times.
-* [BoundStatement (from PreparedStatement)](prepared/): obtained by binding values to a prepared
+* [BoundStatement (from PreparedStatement)](prepared/README.md): obtained by binding values to a prepared
query. Typically used for queries that are executed often, with different values.
-* [BatchStatement](batch/): a statement that groups multiple statements to be executed as a batch.
+* [BatchStatement](batch/README.md): a statement that groups multiple statements to be executed as a batch.
All statement types share a [common set of execution attributes][StatementBuilder], that can be set
through either setters or a builder:
-* [execution profile](../configuration/) name, or the profile itself if it's been built dynamically.
-* [idempotent flag](../idempotence/).
-* [tracing flag](../tracing/).
-* [query timestamp](../query_timestamps/).
-* [page size and paging state](../paging/).
-* [per-query keyspace](per_query_keyspace/) (Cassandra 4 or above).
-* [token-aware routing](../load_balancing/#token-aware) information (keyspace and key/token).
+* [execution profile](../configuration/README.md) name, or the profile itself if it's been built dynamically.
+* [idempotent flag](../idempotence/README.md).
+* [tracing flag](../tracing/README.md).
+* [query timestamp](../query_timestamps/README.md).
+* [page size and paging state](../paging/README.md).
+* [per-query keyspace](per_query_keyspace/README.md) (Cassandra 4 or above).
+* [token-aware routing](../load_balancing/README.md#token-aware) information (keyspace and key/token).
* normal and serial consistency level.
* query timeout.
* custom payload to send arbitrary key/value pairs with the request (you should only need this if
@@ -74,7 +74,7 @@ such as [ErrorProne](https://errorprone.info/) -- can check correct usage at bui
mistakes as compiler errors.
Note that some attributes can either be set programmatically, or inherit a default value defined in
-the [configuration](../configuration/). Namely, these are: idempotent flag, query timeout,
+the [configuration](../configuration/README.md). Namely, these are: idempotent flag, query timeout,
consistency levels and page size. We recommend the configuration approach whenever possible (you
can create execution profiles to capture common combinations of those options).
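For instance, a profile capturing one such combination might look like this (a sketch — the profile name `oltp` is arbitrary, and the options are standard `basic.request` settings):

```
datastax-java-driver {
  profiles {
    oltp {
      basic.request.timeout = 100 milliseconds
      basic.request.consistency = LOCAL_QUORUM
      basic.request.page-size = 500
      basic.request.default-idempotence = true
    }
  }
}
```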
diff --git a/manual/core/statements/per_query_keyspace/README.md b/manual/core/statements/per_query_keyspace/README.md
index 9a7ffa338c9..8b1b550ddac 100644
--- a/manual/core/statements/per_query_keyspace/README.md
+++ b/manual/core/statements/per_query_keyspace/README.md
@@ -33,7 +33,7 @@ switching the whole session to that keyspace either. For example, you might have
setup where identical requests are executed against different keyspaces.
**This feature is only available with Cassandra 4.0 or above** ([CASSANDRA-10145]). Make sure you
-are using [native protocol](../../native_protocol/) v5 or above to connect.
+are using [native protocol](../../native_protocol/README.md) v5 or above to connect.
If you try against an older version, you will get an error:
@@ -57,7 +57,7 @@ SimpleStatement statement =
session.execute(statement);
```
-You can do this on [simple](../simple/), [prepared](../prepared) or [batch](../batch/) statements.
+You can do this on [simple](../simple/README.md), [prepared](../prepared/README.md) or [batch](../batch/README.md) statements.
If the session is connected to another keyspace, the per-query keyspace takes precedence:
diff --git a/manual/core/statements/prepared/README.md b/manual/core/statements/prepared/README.md
index 5a87b238cbc..3ce3031f0b5 100644
--- a/manual/core/statements/prepared/README.md
+++ b/manual/core/statements/prepared/README.md
@@ -101,11 +101,11 @@ the `PREPARED` response also contains useful metadata about the CQL query:
* the CQL types of the bound variables. This allows bound statements' `set` methods to perform
better checks, and fail fast (without a server round-trip) if the types are wrong.
* which bound variables are part of the partition key. This allows bound statements to automatically
- compute their [routing key](../../load_balancing/#token-aware).
+ compute their [routing key](../../load_balancing/README.md#token-aware).
* more optimizations might get added in the future. For example, [CASSANDRA-10813] suggests adding
- an "[idempotent](../../idempotence)" flag to the response.
+ an "[idempotent](../../idempotence/README.md)" flag to the response.
-If you have a unique query that is executed only once, a [simple statement](../simple/) will be more
+If you have a unique query that is executed only once, a [simple statement](../simple/README.md) will be more
efficient. But note that this should be pretty rare: most client applications typically repeat the
same queries over and over, and a parameterized version can be extracted and prepared.
@@ -150,8 +150,8 @@ Note that caching is based on:
but different consistency levels will yield two distinct prepared statements (that each produce
bound statements with their respective consistency level).
-The size of the cache is exposed as a session-level [metric](../../metrics/)
-`cql-prepared-cache-size`. The cache uses [weak values]([guava eviction]) eviction, so this
+The size of the cache is exposed as a session-level [metric](../../metrics/README.md)
+`cql-prepared-cache-size`. The cache uses [weak values][guava eviction] eviction, so this
represents the number of `PreparedStatement` instances that your application has created, and is
still holding a reference to.
@@ -217,7 +217,7 @@ parameters.
#### Unset values
-With [native protocol](../../native_protocol/) V3, all variables must be bound. With native protocol
+With [native protocol](../../native_protocol/README.md) V3, all variables must be bound. With native protocol
V4 (Cassandra 2.2 / DSE 5) or above, variables can be left unset, in which case they will be ignored
(no tombstones will be generated). If you're reusing a bound statement, you can use the `unset`
method to unset variables that were previously set:
@@ -314,14 +314,14 @@ achieve this:
|<------------------------------| |
```
-You can customize these strategies through the [configuration](../../configuration/):
+You can customize these strategies through the [configuration](../../configuration/README.md):
* `datastax-java-driver.advanced.prepared-statements.prepare-on-all-nodes` controls whether
statements are initially re-prepared on other hosts (step 1 above);
* `datastax-java-driver.advanced.prepared-statements.reprepare-on-up` controls how statements are
re-prepared on a node that comes back up (step 2 above).
-Read the [reference configuration](../../configuration/reference/) for a detailed description of each
+Read the [reference configuration](../../configuration/reference/README.md) for a detailed description of each
of those options.
### Prepared statements and schema changes
@@ -344,7 +344,7 @@ To avoid this, do not create prepared statements for `SELECT *` queries if you p
changes involving adding or dropping columns. Instead, always list all columns of interest in your
statement, i.e.: `SELECT b, c FROM foo`.
-With Cassandra 4 and [native protocol](../../native_protocol/) v5, this issue is fixed
+With Cassandra 4 and [native protocol](../../native_protocol/README.md) v5, this issue is fixed
([CASSANDRA-10786]): the server detects that the driver is operating on stale metadata and sends the
new version with the response; the driver updates its local cache transparently, and the client can
observe the new columns in the result set.
diff --git a/manual/core/statements/simple/README.md b/manual/core/statements/simple/README.md
index 13ddbb7a389..05aa2e676cd 100644
--- a/manual/core/statements/simple/README.md
+++ b/manual/core/statements/simple/README.md
@@ -63,7 +63,7 @@ client driver Cassandra
```
If you execute the same query often (or a similar query with different column values), consider a
-[prepared statement](../prepared/) instead.
+[prepared statement](../prepared/README.md) instead.
### Creating an instance
@@ -147,7 +147,7 @@ session.execute(
### Type inference
Another consequence of not parsing query strings is that the driver has to guess how to serialize
-values, based on their Java type (see the [default type mappings](../../#cql-to-java-type-mapping)).
+values, based on their Java type (see the [default type mappings](../../README.md#cql-to-java-type-mapping)).
This can be tricky, in particular for numeric types:
```java
@@ -198,7 +198,7 @@ session.execute(
.build());
```
-Or you could also use [prepared statements](../prepared/), which don't have this limitation since
+Or you could also use [prepared statements](../prepared/README.md), which don't have this limitation since
parameter types are known in advance.
[SimpleStatement]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/cql/SimpleStatement.html
diff --git a/manual/core/throttling/README.md b/manual/core/throttling/README.md
index 275c0cb5b40..e533ebcd00f 100644
--- a/manual/core/throttling/README.md
+++ b/manual/core/throttling/README.md
@@ -53,14 +53,14 @@ Note that the following requests are also affected by throttling:
* preparing a statement (either directly, or indirectly when the driver reprepares on other nodes,
or when a node comes back up -- see
- [how the driver prepares](../statements/prepared/#how-the-driver-prepares));
+ [how the driver prepares](../statements/prepared/README.md#how-the-driver-prepares));
* fetching the next page of a result set (which happens in the background when you iterate the
  synchronous variant `ResultSet`);
-* fetching a [query trace](../tracing/).
+* fetching a [query trace](../tracing/README.md).
### Configuration
-Request throttling is parameterized in the [configuration](../configuration/) under
+Request throttling is parameterized in the [configuration](../configuration/README.md) under
`advanced.throttler`. There are various implementations, detailed in the following sections:
#### Pass through
@@ -77,7 +77,7 @@ This is a no-op implementation: requests are simply allowed to proceed all the t
Note that you will still hit a limit if all your connections run out of stream ids. In that case,
requests will fail with an [AllNodesFailedException], with the `getErrors()` method returning a
-[BusyConnectionException] for each node. See the [connection pooling](../pooling/) page.
+[BusyConnectionException] for each node. See the [connection pooling](../pooling/README.md) page.
#### Concurrency-based
@@ -103,8 +103,8 @@ with [BusyConnectionException] instead of being throttled. The total number of s
function of the number of connected nodes and the `connection.pool.*.size` and
`connection.max-requests-per-connection` configuration options. Keep in mind that aggressive
speculative executions and timeout options can inflate stream id consumption, so keep a safety
-margin. One good way to get this right is to track the `pool.available-streams` [metric](../metrics)
-on every node, and make sure it never reaches 0. See the [connection pooling](../pooling/) page.
+margin. One good way to get this right is to track the `pool.available-streams` [metric](../metrics/README.md)
+on every node, and make sure it never reaches 0. See the [connection pooling](../pooling/README.md) page.
#### Rate-based
@@ -129,14 +129,14 @@ does not necessarily mean that the rate is back to normal. So instead the thrott
rate periodically and dequeues when possible, this is controlled by the `drain-interval` option.
Picking the right interval is a matter of balance: too low might consume too many resources and only
dequeue a few requests at a time, but too high will delay your requests too much; start with a few
-milliseconds and use the `cql-requests` [metric](../metrics/) to check the impact on your latencies.
+milliseconds and use the `cql-requests` [metric](../metrics/README.md) to check the impact on your latencies.
Like with the concurrency-based throttler, you should make sure that your target rate is in line
with the pooling options; see the recommendations in the previous section.
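Putting those recommendations together, a rate-based throttler might be sketched as follows (option names per the 4.x reference configuration; the numbers are placeholders to adapt to your own benchmark):

```
datastax-java-driver.advanced.throttler {
  class = RateLimitingRequestThrottler
  # steady-state ceiling for the whole session
  max-requests-per-second = 10000
  # requests beyond the rate wait here; past this, they fail fast
  max-queue-size = 50000
  # how often queued requests are re-examined and dequeued
  drain-interval = 10 milliseconds
}
```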
### Monitoring
-Enable the following [metrics](../metrics/) to monitor how the throttler is performing:
+Enable the following [metrics](../metrics/README.md) to monitor how the throttler is performing:
```
datastax-java-driver {
diff --git a/manual/core/tracing/README.md b/manual/core/tracing/README.md
index f9beca8e49b..660ffddccba 100644
--- a/manual/core/tracing/README.md
+++ b/manual/core/tracing/README.md
@@ -42,7 +42,7 @@ results.
### Enabling tracing
Set the tracing flag on the `Statement` instance. There are various ways depending on how you build
-it (see [statements](../statements/) for more details):
+it (see [statements](../statements/README.md) for more details):
```java
// Setter-based:
diff --git a/manual/core/tuples/README.md b/manual/core/tuples/README.md
index d0684b77569..43c8f8eefae 100644
--- a/manual/core/tuples/README.md
+++ b/manual/core/tuples/README.md
@@ -32,7 +32,7 @@ Ordered set of anonymous, typed fields, e.g. `tuple`, `(1, 'a'
-----
[CQL tuples][cql_doc] are ordered sets of anonymous, typed fields. They can be used as a column type
-in tables, or a field type in [user-defined types](../udts/):
+in tables, or a field type in [user-defined types](../udts/README.md):
```
CREATE TABLE ks.collect_things (
@@ -77,7 +77,7 @@ ways to get it:
TupleType tupleType = (TupleType) ps.getVariableDefinitions().get("v").getType();
```
-* from the driver's [schema metadata](../metadata/schema/):
+* from the driver's [schema metadata](../metadata/schema/README.md):
```java
TupleType tupleType =
@@ -102,7 +102,7 @@ ways to get it:
TupleType tupleType = DataTypes.tupleOf(DataTypes.INT, DataTypes.TEXT, DataTypes.FLOAT);
```
- Note that the resulting type is [detached](../detachable_types).
+ Note that the resulting type is [detached](../detachable_types/README.md).
Once you have the type, call `newValue()` and set the fields:
diff --git a/manual/core/udts/README.md b/manual/core/udts/README.md
index a22057030ae..9a4dcc114eb 100644
--- a/manual/core/udts/README.md
+++ b/manual/core/udts/README.md
@@ -93,7 +93,7 @@ various ways to get it:
UserDefinedType udt = (UserDefinedType) ps.getVariableDefinitions().get("v").getType();
```
-* from the driver's [schema metadata](../metadata/schema/):
+* from the driver's [schema metadata](../metadata/schema/README.md):
```java
UserDefinedType udt =
@@ -113,7 +113,7 @@ Note that the driver's official API does not expose a way to build [UserDefinedT
manually. This is because the type's internal definition must precisely match the database schema;
if it doesn't (for example if the fields are not in the same order), you run the risk of inserting
corrupt data, that you won't be able to read back. There is still a way to do it with the driver,
-but it's part of the [internal API](../../api_conventions/):
+but it's part of the [internal API](../../api_conventions/README.md):
```java
// Advanced usage: make sure you understand the risks
@@ -127,7 +127,7 @@ UserDefinedType udt =
.build();
```
-Note that a manually created type is [detached](../detachable_types).
+Note that a manually created type is [detached](../detachable_types/README.md).
Once you have the type, call `newValue()` and set the fields:
diff --git a/manual/developer/README.md b/manual/developer/README.md
index b6e0bda16ed..1bef958ef7a 100644
--- a/manual/developer/README.md
+++ b/manual/developer/README.md
@@ -24,15 +24,15 @@ This section explains how driver internals work. The intended audience is:
* driver developers and contributors;
* framework authors, or architects who want to write advanced customizations and integrations.
-Most of this material will involve "internal" packages; see [API conventions](../api_conventions/)
+Most of this material will involve "internal" packages; see [API conventions](../api_conventions/README.md)
for more explanations.
-We recommend reading about the [common infrastructure](common/) first. Then the documentation goes
+We recommend reading about the [common infrastructure](common/README.md) first. Then the documentation goes
from lowest to highest level:
-* [Native protocol layer](native_protocol/): binary encoding of the TCP payloads;
-* [Netty pipeline](netty_pipeline/): networking and low-level stream management;
-* [Request execution](request_execution/): higher-level handling of user requests and responses;
-* [Administrative tasks](admin/): everything else (cluster state and metadata).
+* [Native protocol layer](native_protocol/README.md): binary encoding of the TCP payloads;
+* [Netty pipeline](netty_pipeline/README.md): networking and low-level stream management;
+* [Request execution](request_execution/README.md): higher-level handling of user requests and responses;
+* [Administrative tasks](admin/README.md): everything else (cluster state and metadata).
If you're reading this on GitHub, the `.nav` file in each directory contains a suggested order.
diff --git a/manual/developer/admin/README.md b/manual/developer/admin/README.md
index 0ebd9e2d746..bbc4b5ea18f 100644
--- a/manual/developer/admin/README.md
+++ b/manual/developer/admin/README.md
@@ -19,7 +19,7 @@ under the License.
## Administrative tasks
-Aside from the main task of [executing user requests](../request_execution), the driver also needs
+Aside from the main task of [executing user requests](../request_execution/README.md), the driver also needs
to track cluster state and metadata. This is done with a number of administrative components:
```ditaa
@@ -48,7 +48,7 @@ node info| | schema | +------------+ EventBus |
metadata changed events
```
-Note: the event bus is covered in the [common infrastructure](../common/event_bus) section.
+Note: the event bus is covered in the [common infrastructure](../common/event_bus/README.md) section.
### Control connection
@@ -74,7 +74,7 @@ writing, the session also references the control connection directly, but that's
### Metadata manager
This component is responsible for maintaining the contents of
-[session.getMetadata()](../../core/metadata/).
+[session.getMetadata()](../../core/metadata/README.md).
One big improvement in driver 4 is that the `Metadata` object is immutable and updated atomically;
this guarantees a consistent view of the cluster at a given point in time. For example, if a
@@ -85,7 +85,7 @@ keyspace name is referenced in the token map, there will always be a correspondi
managed by a `MetadataRefresh` object that computes the new metadata, along with an optional list of
events to publish on the bus (e.g. table created, keyspace removed, etc.) The new metadata is then
written back to the volatile field. `MetadataManager` follows the [confined inner
-class](../common/concurrency/#cold-path) pattern to ensure that all refreshes are applied serially,
+class](../common/concurrency/README.md#cold-path) pattern to ensure that all refreshes are applied serially,
from a single admin thread. This guarantees that two refreshes can't start from the same initial
state and overwrite each other.
@@ -323,7 +323,7 @@ service instead of relying on system tables and gossip (see
[JAVA-1082](https://datastax-oss.atlassian.net/browse/JAVA-1082)).
A custom implementation can be plugged by [extending the
-context](../common/context/#overriding-a-context-component) and overriding `buildTopologyMonitor`.
+context](../common/context/README.md#overriding-a-context-component) and overriding `buildTopologyMonitor`.
It should:
* implement the methods of `TopologyMonitor` by querying the discovery service;
@@ -338,5 +338,5 @@ information returned by the topology monitor.
It's less likely that this will be overridden directly. But the schema querying and parsing logic is
abstracted behind two factories that handle the differences between Cassandra versions:
`SchemaQueriesFactory` and `SchemaParserFactory`. These are pluggable by [extending the
-context](../common/context/#overriding-a-context-component) and overriding the corresponding
+context](../common/context/README.md#overriding-a-context-component) and overriding the corresponding
`buildXxx` methods.
diff --git a/manual/developer/common/README.md b/manual/developer/common/README.md
index 13ad8639e62..b4db64c8474 100644
--- a/manual/developer/common/README.md
+++ b/manual/developer/common/README.md
@@ -21,8 +21,8 @@ under the License.
This covers utilities or concept that are shared throughout the codebase:
-* the [context](context/) is what glues everything together, and your primary entry point to extend
+* the [context](context/README.md) is what glues everything together, and your primary entry point to extend
the driver.
-* we explain the two major approaches to deal with [concurrency](concurrency/) in the driver.
-* the [event bus](event_bus/) is used to decouple some of the internal components through
+* we explain the two major approaches to deal with [concurrency](concurrency/README.md) in the driver.
+* the [event bus](event_bus/README.md) is used to decouple some of the internal components through
asynchronous messaging.
diff --git a/manual/developer/common/concurrency/README.md b/manual/developer/common/concurrency/README.md
index fb493930d6e..9265985a316 100644
--- a/manual/developer/common/concurrency/README.md
+++ b/manual/developer/common/concurrency/README.md
@@ -97,7 +97,7 @@ fields, and methods are guaranteed to always run in isolation, eliminating subtl
### Non-blocking
Whether on the hot or cold path, internal code is almost 100% lock-free. The driver guarantees on
-lock-freedom are [detailed](../../../core/non_blocking) in the core manual.
+lock-freedom are [detailed](../../../core/non_blocking/README.md) in the core manual.
If an internal component needs to execute a query, it does so asynchronously, and registers
callbacks to process the results. Examples of this can be found in `ReprepareOnUp` and
diff --git a/manual/developer/native_protocol/README.md b/manual/developer/native_protocol/README.md
index b96553fc51b..6211443253d 100644
--- a/manual/developer/native_protocol/README.md
+++ b/manual/developer/native_protocol/README.md
@@ -186,12 +186,12 @@ The driver initializes its `FrameCodec` in `DefaultDriverContext.buildFrameCodec
### Extension points
The default frame codec can be replaced by [extending the
-context](../common/context/#overriding-a-context-component) to override `buildFrameCodec`. This
+context](../common/context/README.md#overriding-a-context-component) to override `buildFrameCodec`. This
can be used to add or remove a protocol version, or replace a particular codec.
If protocol versions change, `ProtocolVersionRegistry` will likely be affected as well.
Also, depending on the nature of the protocol changes, the driver's [request
-processors](../request_execution/#request-processors) might require some adjustments: either replace
+processors](../request_execution/README.md#request-processors) might require some adjustments: either replace
them, or introduce separate ones (possibly with new `executeXxx()` methods on a custom session
interface).
diff --git a/manual/developer/netty_pipeline/README.md b/manual/developer/netty_pipeline/README.md
index b596832e202..dea38ad307c 100644
--- a/manual/developer/netty_pipeline/README.md
+++ b/manual/developer/netty_pipeline/README.md
@@ -19,7 +19,7 @@ under the License.
## Netty pipeline
-With the [protocol layer](../native_protocol) in place, the next step is to build the logic for a
+With the [protocol layer](../native_protocol/README.md) in place, the next step is to build the logic for a
single server connection.
We use [Netty](https://netty.io/) for network I/O (to learn more about Netty, [this
@@ -81,7 +81,7 @@ See also the [Extension points](#extension-points) section below.
### FrameEncoder and FrameDecoder
This is where we integrate the protocol layer, as explained
-[here](../native_protocol/#integration-in-the-driver).
+[here](../native_protocol/README.md#integration-in-the-driver).
Unlike the other pipeline stages, we use separate handlers for incoming and outgoing messages.
@@ -121,7 +121,7 @@ with each other.
In particular, a big difference from driver 3 is that stream ids are assigned within the event loop,
instead of from client code before writing to the channel (see also [connection
-pooling](../request_execution/#connection_pooling)). `StreamIdGenerator` is not thread-safe.
+pooling](../request_execution/README.md#connection-pooling)). `StreamIdGenerator` is not thread-safe.
All communication between the handler and the outside world must be done through messages or channel
events. There are 3 exceptions to this rule: `getAvailableIds`, `getInflight` and `getOrphanIds`,
@@ -145,11 +145,11 @@ Once the initialization is complete, `ProtocolInitHandler` removes itself from t
#### NettyOptions
-The `advanced.netty` section in the [configuration](../../core/configuration/reference/) exposes a
+The `advanced.netty` section in the [configuration](../../core/configuration/reference/README.md) exposes a
few high-level options.
For more elaborate customizations, you can [extend the
-context](../common/context/#overriding-a-context-component) to plug in a custom `NettyOptions`
+context](../common/context/README.md#overriding-a-context-component) to plug in a custom `NettyOptions`
implementation. This allows you to do things such as:
* reusing existing event loops;
@@ -158,7 +158,7 @@ implementation. This allows you to do things such as:
#### SslHandlerFactory
-The [user-facing API](../../core/ssl/) (`advanced.ssl-engine-factory` in the configuration, or
+The [user-facing API](../../core/ssl/README.md) (`advanced.ssl-engine-factory` in the configuration, or
`SessionBuilder.withSslContext` / `SessionBuilder.withSslEngineFactory`) only supports Java's
default SSL implementation.
@@ -172,7 +172,7 @@ boringssl. This requires a bit of custom development against the internal API:
* the constructor will create a Netty [SslContext] with [SslContextBuilder.forClient], and store
it in a field;
* `newSslHandler` will delegate to one of the [SslContext.newHandler] methods;
-* [extend the context](../common/context/#overriding-a-context-component) and override
+* [extend the context](../common/context/README.md#overriding-a-context-component) and override
`buildSslHandlerFactory` to plug your custom implementation.
[SslContext]: https://netty.io/4.1/api/io/netty/handler/ssl/SslContext.html
diff --git a/manual/developer/request_execution/README.md b/manual/developer/request_execution/README.md
index 38a0a55fbd7..e706644bd98 100644
--- a/manual/developer/request_execution/README.md
+++ b/manual/developer/request_execution/README.md
@@ -19,7 +19,7 @@ under the License.
## Request execution
-The [Netty pipeline](../netty_pipeline/) gives us the ability to send low-level protocol messages on
+The [Netty pipeline](../netty_pipeline/README.md) gives us the ability to send low-level protocol messages on
a single connection.
The request execution layer builds upon that to:
@@ -73,7 +73,7 @@ will be explained in [Request processors](#request-processors).
```
`DefaultSession` contains the session implementation. It follows the [confined inner
-class](../common/concurrency/#cold-path) pattern to simplify concurrency.
+class](../common/concurrency/README.md#cold-path) pattern to simplify concurrency.
### Connection pooling
@@ -91,7 +91,7 @@ class](../common/concurrency/#cold-path) pattern to simplify concurrency.
```
`ChannelPool` handles the connections to a given node, for a given session. It follows the [confined
-inner class](../common/concurrency/#cold-path) pattern to simplify concurrency. There are a few
+inner class](../common/concurrency/README.md#cold-path) pattern to simplify concurrency. There are a few
differences compared to the 3.x implementation:
#### Fixed size
@@ -112,12 +112,12 @@ sales), then a manual configuration change is good enough.
To get a connection to a node, client code calls `ChannelPool.next()`. This returns the less busy
connection, based on the the `getAvailableIds()` counter exposed by
-[InFlightHandler](netty_pipeline/#in-flight-handler).
+[InFlightHandler](../netty_pipeline/README.md#inflighthandler).
If all connections are busy, there is no queuing; the driver moves to the next node immediately. The
rationale is that it's better to try another node that might be ready to reply, instead of
introducing an additional wait for each node. If the user wants queuing when all nodes are busy,
-it's better to do it at the session level with a [throttler](../../core/throttling/), which provides
+it's better to do it at the session level with a [throttler](../../core/throttling/README.md), which provides
more intuitive configuration.
Before 4.5.0, there was also no preemptive acquisition of the stream id outside of the event loop:
@@ -231,7 +231,7 @@ registry to find the processor that matches the request and result types.
A processor is responsible for:
-* converting the user request into [protocol-level messages](../native_protocol/);
+* converting the user request into [protocol-level messages](../native_protocol/README.md);
* selecting a coordinator node, and obtaining a channel from its connection pool;
* writing the request to the channel;
* handling timeouts, retries and speculative executions;
@@ -261,7 +261,7 @@ public interface CqlSession extends Session {
#### RequestProcessorRegistry
You can customize the set of request processors by [extending the
-context](../common/context/#overriding-a-context-component) and overriding
+context](../common/context/README.md#overriding-a-context-component) and overriding
`buildRequestProcessorRegistry`.
This can be used to either:
diff --git a/manual/mapper/README.md b/manual/mapper/README.md
index 27005b671ad..4996ddd4625 100644
--- a/manual/mapper/README.md
+++ b/manual/mapper/README.md
@@ -23,7 +23,7 @@ The mapper generates the boilerplate to execute queries and convert the results
application-level objects.
It is published as two artifacts: `org.apache.cassandra:java-driver-mapper-processor` and
-`org.apache.cassandra:java-driver-mapper-runtime`. See [Integration](config/) for detailed instructions
+`org.apache.cassandra:java-driver-mapper-runtime`. See [Integration](config/README.md) for detailed instructions
for different build tools.
### Quick start
@@ -75,7 +75,7 @@ takes all the fields, we have to define the no-arg constructor explicitly.
We use mapper annotations to mark the class as an entity, and indicate which field(s) correspond to
the primary key.
-More annotations are available; for more details, see [Entities](entities/).
+More annotations are available; for more details, see [Entities](entities/README.md).
#### DAO interface
@@ -103,7 +103,7 @@ public interface ProductDao {
Again, mapper annotations are used to mark the interface, and indicate what kind of request each
method should execute. You can probably guess what they are in this example.
-For the full list of available query types, see [DAOs](daos/).
+For the full list of available query types, see [DAOs](daos/README.md).
#### Mapper interface
@@ -121,14 +121,14 @@ public interface InventoryMapper {
}
```
-For more details, see [Mapper](mapper/).
+For more details, see [Mapper](mapper/README.md).
#### Generating the code
The mapper uses *annotation processing*: it hooks into the Java compiler to analyze annotations, and
generate additional classes that implement the mapping logic. Annotation processing is a common
technique in modern frameworks, and is generally well supported by build tools and IDEs; this is
-covered in detail in [Configuring the annotation processor](config/).
+covered in detail in [Configuring the annotation processor](config/README.md).
Pay attention to the compiler output: the mapper processor will sometimes generate warnings if
annotations are used incorrectly.
@@ -156,7 +156,7 @@ dao.save(new Product(UUID.randomUUID(), "Mechanical keyboard"));
### Logging
The code generated by the mapper includes logs. They are issued with SLF4J, and can be configured
-the same way as the [core driver logs](../core/logging/).
+the same way as the [core driver logs](../core/logging/README.md).
They can help you figure out which queries the mapper is generating under the hood, for example:
diff --git a/manual/mapper/config/README.md b/manual/mapper/config/README.md
index 1e4f9981306..698aa3e150d 100644
--- a/manual/mapper/config/README.md
+++ b/manual/mapper/config/README.md
@@ -74,7 +74,7 @@ configuration (make sure you use version 3.5 or higher):
```
-Alternatively (e.g. if you are using the [BOM](../../core/bom/)), you may also declare the processor
+Alternatively (e.g. if you are using the [BOM](../../core/bom/README.md)), you may also declare the processor
as a regular dependency in the "provided" scope:
```xml
@@ -128,7 +128,7 @@ You will find the generated files in `build/generated/sources/annotationProcesso
### Integration with other languages and libraries
-* [Kotlin](kotlin/)
-* [Lombok](lombok/)
-* [Java 14 records](record/)
-* [Scala](scala/)
+* [Kotlin](kotlin/README.md)
+* [Lombok](lombok/README.md)
+* [Java 14 records](record/README.md)
+* [Scala](scala/README.md)
diff --git a/manual/mapper/config/kotlin/README.md b/manual/mapper/config/kotlin/README.md
index a78bf04fb79..3f5e5205bdc 100644
--- a/manual/mapper/config/kotlin/README.md
+++ b/manual/mapper/config/kotlin/README.md
@@ -27,7 +27,7 @@ We have a full example at [DataStax-Examples/object-mapper-jvm/kotlin].
### Writing the model
You can use Kotlin [data classes] for your entities. Data classes are usually
-[immutable](../../entities/#mutability), but you don't need to declare that explicitly with
+[immutable](../../entities/README.md#mutability), but you don't need to declare that explicitly with
[@PropertyStrategy]: the mapper detects that it's processing Kotlin code, and will assume `mutable =
false` by default:
@@ -46,10 +46,10 @@ declare a default value for every component in order to generate a no-arg constr
data class Product(@PartitionKey var id: Int? = null, var description: String? = null)
```
-All of the [property annotations](../../entities/#property-annotations) can be declared directly on
+All of the [property annotations](../../entities/README.md#property-annotations) can be declared directly on
the components.
-If you want to take advantage of [null saving strategies](../../daos/null_saving/), your components
+If you want to take advantage of [null saving strategies](../../daos/null_saving/README.md), your components
should be nullable.
The other mapper interfaces are direct translations of the Java versions:
diff --git a/manual/mapper/config/record/README.md b/manual/mapper/config/record/README.md
index 95530d52742..76bc27283bb 100644
--- a/manual/mapper/config/record/README.md
+++ b/manual/mapper/config/record/README.md
@@ -36,7 +36,7 @@ Annotate your records like regular classes:
record Product(@PartitionKey int id, String description) {}
```
-Records are immutable and use the [fluent getter style](../../entities#getter-style), but you don't
+Records are immutable and use the [fluent getter style](../../entities/README.md#accessor-styles), but you don't
need to declare that explicitly with [@PropertyStrategy]: the mapper detects when it's processing a
record, and will assume `mutable = false, getterStyle = FLUENT` by default.
diff --git a/manual/mapper/config/scala/README.md b/manual/mapper/config/scala/README.md
index 2cb75273d0b..d17fedbc445 100644
--- a/manual/mapper/config/scala/README.md
+++ b/manual/mapper/config/scala/README.md
@@ -38,7 +38,7 @@ case class UserVideo(@(PartitionKey@field) userid: UUID,
previewImageLocation: String)
```
-Case classes are immutable and use the [fluent getter style](../../entities#getter-style), but you
+Case classes are immutable and use the [fluent getter style](../../entities/README.md#accessor-styles), but you
don't need to declare that explicitly with [@PropertyStrategy]: the mapper detects when it's
processing a case class, and will assume `mutable = false, getterStyle = FLUENT` by default.
diff --git a/manual/mapper/daos/README.md b/manual/mapper/daos/README.md
index d12172bf056..d069f2c9985 100644
--- a/manual/mapper/daos/README.md
+++ b/manual/mapper/daos/README.md
@@ -32,7 +32,7 @@ Interface annotated with [@Dao].
-----
A DAO is an interface that defines a set of query methods. In general, those queries will relate to
-the same [entity](../entities/) (although that is not a requirement).
+the same [entity](../entities/README.md) (although that is not a requirement).
It must be annotated with [@Dao]:
@@ -55,22 +55,22 @@ public interface ProductDao {
To add queries, define methods on your interface and mark them with one of the following
annotations:
-* [@Delete](delete/)
-* [@GetEntity](getentity/)
-* [@Insert](insert/)
-* [@Query](query/)
-* [@QueryProvider](queryprovider/)
-* [@Select](select/)
-* [@SetEntity](setentity/)
-* [@Update](update/)
-* [@Increment](increment/)
+* [@Delete](delete/README.md)
+* [@GetEntity](getentity/README.md)
+* [@Insert](insert/README.md)
+* [@Query](query/README.md)
+* [@QueryProvider](queryprovider/README.md)
+* [@Select](select/README.md)
+* [@SetEntity](setentity/README.md)
+* [@Update](update/README.md)
+* [@Increment](increment/README.md)
The methods can have any name. The allowed parameters and return type are specific to each
annotation.
### Runtime usage
-To obtain a DAO instance, use a [factory method](../mapper/#dao-factory-methods) on the mapper
+To obtain a DAO instance, use a [factory method](../mapper/README.md#dao-factory-methods) on the mapper
interface.
```java
@@ -171,4 +171,4 @@ To control how the hierarchy is scanned, annotate interfaces with [@HierarchySca
[@DaoFactory]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/mapper/annotations/DaoFactory.html
[@DefaultNullSavingStrategy]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/mapper/annotations/DefaultNullSavingStrategy.html
[@HierarchyScanStrategy]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/mapper/annotations/HierarchyScanStrategy.html
-[Entity Inheritance]: ../entities/#inheritance
+[Entity Inheritance]: ../entities/README.md#inheritance
diff --git a/manual/mapper/daos/custom_types/README.md b/manual/mapper/daos/custom_types/README.md
index 19f689655a7..d1235a02d4a 100644
--- a/manual/mapper/daos/custom_types/README.md
+++ b/manual/mapper/daos/custom_types/README.md
@@ -20,7 +20,7 @@ under the License.
## Custom result types
The mapper supports a pre-defined set of built-in types for DAO method results. For example, a
-[Select](../select/#return-type) method can return a single entity, an asynchronous
+[Select](../select/README.md#return-type) method can return a single entity, an asynchronous
`CompletionStage`, a `ReactiveResultSet`, etc.
Sometimes it's convenient to use your own types. For example if you use a specific Reactive Streams
diff --git a/manual/mapper/daos/delete/README.md b/manual/mapper/daos/delete/README.md
index e67ecdc8a6e..c7b3453bd6b 100644
--- a/manual/mapper/daos/delete/README.md
+++ b/manual/mapper/daos/delete/README.md
@@ -19,7 +19,7 @@ under the License.
## Delete methods
-Annotate a DAO method with [@Delete] to generate a query that deletes an [Entity](../../entities):
+Annotate a DAO method with [@Delete] to generate a query that deletes an [Entity](../../entities/README.md):
```java
@Dao
@@ -48,7 +48,7 @@ The method can operate on:
```
In this case, the parameters must match the types of the [primary key
- columns](../../entities/#primary-key-columns), in the exact order (as defined by the
+ columns](../../entities/README.md#primary-key-columns), in the exact order (as defined by the
[@PartitionKey] and [@ClusteringColumn] annotations). The parameter names don't necessarily need
to match the names of the columns.
@@ -99,7 +99,7 @@ void deleteIfDescriptionMatches(UUID productId, String expectedDescription);
A `Function` or `UnaryOperator`
can be added as the **last** parameter. It will be applied to the statement before execution. This
allows you to customize certain aspects of the request (page size, timeout, etc) at runtime. See
-[statement attributes](../statement_attributes/).
+[statement attributes](../statement_attributes/README.md).
### Return type
@@ -154,7 +154,7 @@ The method can return:
ReactiveResultSet deleteReactive(Product product);
```
-* a [custom type](../custom_types).
+* a [custom type](../custom_types/README.md).
Note that you can also return a boolean or result set for non-conditional queries, but there's no
practical purpose for that since those queries always return `wasApplied = true` and an empty result
@@ -162,13 +162,13 @@ set.
### Target keyspace and table
-If a keyspace was specified [when creating the DAO](../../mapper/#dao-factory-methods), then the
+If a keyspace was specified [when creating the DAO](../../mapper/README.md#dao-factory-methods), then the
generated query targets that keyspace. Otherwise, it doesn't specify a keyspace, and will only work
if the mapper was built from a session that has a [default keyspace] set.
If a table was specified when creating the DAO, then the generated query targets that table.
Otherwise, it uses the default table name for the entity (which is determined by the name of the
-entity class and the [naming strategy](../../entities/#naming-strategy)).
+entity class and the [naming strategy](../../entities/README.md#naming-strategy)).
[default keyspace]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/session/SessionBuilder.html#withKeyspace-com.datastax.oss.driver.api.core.CqlIdentifier-
[AsyncResultSet]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/cql/AsyncResultSet.html
diff --git a/manual/mapper/daos/getentity/README.md b/manual/mapper/daos/getentity/README.md
index de9a530b558..bf8ff80a232 100644
--- a/manual/mapper/daos/getentity/README.md
+++ b/manual/mapper/daos/getentity/README.md
@@ -20,7 +20,7 @@ under the License.
## GetEntity methods
Annotate a DAO method with [@GetEntity] to convert a core driver data structure into one or more
-[Entities](../../entities):
+[Entities](../../entities/README.md):
```java
@Dao
diff --git a/manual/mapper/daos/increment/README.md b/manual/mapper/daos/increment/README.md
index 44b017be2e1..6559861de6e 100644
--- a/manual/mapper/daos/increment/README.md
+++ b/manual/mapper/daos/increment/README.md
@@ -50,10 +50,10 @@ public interface VotesDao {
The entity class must be specified with `entityClass` in the annotation.
-The method's parameters must start with the [full primary key](../../entities/#primary-key-columns),
+The method's parameters must start with the [full primary key](../../entities/README.md#primary-key-columns),
in the exact order (as defined by the [@PartitionKey] and [@ClusteringColumn] annotations in the
entity class). The parameter names don't necessarily need to match the names of the columns, but the
-types must match. Unlike other methods like [@Select](../select/) or [@Delete](../delete/), counter
+types must match. Unlike other methods like [@Select](../select/README.md) or [@Delete](../delete/README.md), counter
updates cannot operate on a whole partition, they need to target exactly one row; so all the
partition key and clustering columns must be specified.
@@ -71,13 +71,13 @@ void incrementUpVotes(int articleId, @CqlName("up_votes") long foobar);
When you invoke the method, each parameter value is interpreted as a **delta** that will be applied
to the counter. In other words, if you pass 1, the counter will be incremented by 1. Negative values
are allowed. If you are using Cassandra 2.2 or above, you can use `Long` and pass `null` for some of
-the parameters, they will be ignored (following [NullSavingStrategy#DO_NOT_SET](../null_saving/)
+the parameters; they will be ignored (following [NullSavingStrategy#DO_NOT_SET](../null_saving/README.md)
semantics). If you are using Cassandra 2.1, `null` values will trigger a runtime error.
A `Function` or `UnaryOperator`
can be added as the **last** parameter. It will be applied to the statement before execution. This
allows you to customize certain aspects of the request (page size, timeout, etc) at runtime. See
-[statement attributes](../statement_attributes/).
+[statement attributes](../statement_attributes/README.md).
### Return type
@@ -86,7 +86,7 @@ The method can return `void`, a void [CompletionStage] or [CompletableFuture], o
### Target keyspace and table
-If a keyspace was specified [when creating the DAO](../../mapper/#dao-factory-methods), then the
+If a keyspace was specified [when creating the DAO](../../mapper/README.md#dao-factory-methods), then the
generated query targets that keyspace. Otherwise, it doesn't specify a keyspace, and will only work
if the mapper was built from a session that has a [default keyspace] set.
diff --git a/manual/mapper/daos/insert/README.md b/manual/mapper/daos/insert/README.md
index b90ffa33a32..02dbc59227c 100644
--- a/manual/mapper/daos/insert/README.md
+++ b/manual/mapper/daos/insert/README.md
@@ -19,7 +19,7 @@ under the License.
## Insert methods
-Annotate a DAO method with [@Insert] to generate a query that inserts an [Entity](../../entities):
+Annotate a DAO method with [@Insert] to generate a query that inserts an [Entity](../../entities/README.md):
```java
@Dao
@@ -41,13 +41,13 @@ corresponding additional parameters (same name, and a compatible Java type):
void insertWithTtl(Product product, int ttl);
```
-The annotation can define a [null saving strategy](../null_saving/) that applies to the properties
+The annotation can define a [null saving strategy](../null_saving/README.md) that applies to the properties
of the entity to insert.
A `Function` or `UnaryOperator`
can be added as the **last** parameter. It will be applied to the statement before execution. This
allows you to customize certain aspects of the request (page size, timeout, etc) at runtime. See
-[statement attributes](../statement_attributes/).
+[statement attributes](../statement_attributes/README.md).
### Return type
@@ -115,17 +115,17 @@ The method can return:
ReactiveResultSet insertReactive(Product product);
```
-* a [custom type](../custom_types).
+* a [custom type](../custom_types/README.md).
### Target keyspace and table
-If a keyspace was specified [when creating the DAO](../../mapper/#dao-factory-methods), then the
+If a keyspace was specified [when creating the DAO](../../mapper/README.md#dao-factory-methods), then the
generated query targets that keyspace. Otherwise, it doesn't specify a keyspace, and will only work
if the mapper was built from a session that has a [default keyspace] set.
If a table was specified when creating the DAO, then the generated query targets that table.
Otherwise, it uses the default table name for the entity (which is determined by the name of the
-entity class and the [naming strategy](../../entities/#naming-strategy)).
+entity class and the [naming strategy](../../entities/README.md#naming-strategy)).
[default keyspace]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/session/SessionBuilder.html#withKeyspace-com.datastax.oss.driver.api.core.CqlIdentifier-
[@Insert]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/mapper/annotations/Insert.html
diff --git a/manual/mapper/daos/null_saving/README.md b/manual/mapper/daos/null_saving/README.md
index eed98934356..25d9985243b 100644
--- a/manual/mapper/daos/null_saving/README.md
+++ b/manual/mapper/daos/null_saving/README.md
@@ -37,7 +37,7 @@ Two strategies are available:
update and the column previously had another value, it won't be overwritten.
Note that unset values ([CASSANDRA-7304]) are only supported with [native
- protocol](../../../core/native_protocol/) v4 (Cassandra 2.2) or above . If you try to use this
+ protocol](../../../core/native_protocol/README.md) v4 (Cassandra 2.2) or above. If you try to use this
strategy with a lower Cassandra version, the mapper will throw a [MapperException] when you try
to access the corresponding DAO.
@@ -60,12 +60,12 @@ import static com.datastax.oss.driver.api.mapper.entity.saving.NullSavingStrateg
void update(Product product);
```
-This applies to [@Insert](../insert/), [@Query](../query/), [@SetEntity](../setentity/) and
-[@Update](../update/) (other method types don't need it since they don't write data).
+This applies to [@Insert](../insert/README.md), [@Query](../query/README.md), [@SetEntity](../setentity/README.md) and
+[@Update](../update/README.md) (other method types don't need it since they don't write data).
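For example, a sketch showing both strategies on a hypothetical `ProductDao` (assuming a `Product` entity):

```java
@Dao
public interface ProductDao {
  // Null properties are left unset; existing column values are preserved.
  @Insert(nullSavingStrategy = NullSavingStrategy.DO_NOT_SET)
  void insert(Product product);

  // Null properties are written as CQL nulls, overwriting previous values.
  @Update(nullSavingStrategy = NullSavingStrategy.SET_TO_NULL)
  void update(Product product);
}
```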
### DAO level
-Annotate your [DAO](../../daos/) interface with [@DefaultNullSavingStrategy]. Any method that does
+Annotate your [DAO](../README.md) interface with [@DefaultNullSavingStrategy]. Any method that does
not explicitly define its strategy inherits the DAO-level one:
```java
diff --git a/manual/mapper/daos/query/README.md b/manual/mapper/daos/query/README.md
index a11753da880..bc56f17a91b 100644
--- a/manual/mapper/daos/query/README.md
+++ b/manual/mapper/daos/query/README.md
@@ -41,13 +41,13 @@ placeholders: same name and a compatible Java type.
long countByIdAndYear(int id, int year);
```
-The annotation can define a [null saving strategy](../null_saving/) that applies to the method
+The annotation can define a [null saving strategy](../null_saving/README.md) that applies to the method
parameters.
A `Function` or `UnaryOperator`
can be added as the **last** parameter. It will be applied to the statement before execution. This
allows you to customize certain aspects of the request (page size, timeout, etc) at runtime. See
-[statement attributes](../statement_attributes/).
+[statement attributes](../statement_attributes/README.md).
### Return type
@@ -65,7 +65,7 @@ The method can return:
* a [Row]. This means the result is not converted, the mapper only extracts the first row of the
result set and returns it. The method will return `null` if the result set is empty.
-* a single instance of an [Entity](../../entities/) class. The method will extract the first row and
+* a single instance of an [Entity](../../entities/README.md) class. The method will extract the first row and
convert it, or return `null` if the result set is empty.
* an [Optional] of an entity class. The method will extract the first row and convert
@@ -87,14 +87,14 @@ The method can return:
* a [ReactiveResultSet], or a [MappedReactiveResultSet] of the entity class.
-* a [custom type](../custom_types).
+* a [custom type](../custom_types/README.md).
### Target keyspace and table
To avoid hard-coding the keyspace and table name, the query string supports 3 additional
placeholders: `${keyspaceId}`, `${tableId}` and `${qualifiedTableId}`. They get substituted at DAO
initialization time, with the [keyspace and table that the DAO was built
-with](../../mapper/#dao-factory-methods).
+with](../../mapper/README.md#dao-factory-methods).
For example, given the following:
diff --git a/manual/mapper/daos/queryprovider/README.md b/manual/mapper/daos/queryprovider/README.md
index 593a3a6b1a4..0a6b1d2649e 100644
--- a/manual/mapper/daos/queryprovider/README.md
+++ b/manual/mapper/daos/queryprovider/README.md
@@ -43,7 +43,7 @@ Use this for requests that can't be expressed as static query strings. For examp
* if `day` is null, we query for the whole month: `WHERE id = ? AND month = ?`
* if `month` is also null, we query the whole partition: `WHERE id = ?`
-We assume that you've already written a corresponding [entity](../../entities/) class:
+We assume that you've already written a corresponding [entity](../../entities/README.md) class:
```java
@Entity
@@ -72,7 +72,7 @@ additional [EntityHelper] argument for each provided entity class. We specified
`SensorReading.class` so our argument types are `(MapperContext, EntityHelper)`.
An entity helper is a utility type generated by the mapper. One thing it can do is construct query
-templates (with the [query builder](../../../query_builder/)). We want to retrieve entities so we
+templates (with the [query builder](../../../query_builder/README.md)). We want to retrieve entities so we
use `selectStart()`, chain a first WHERE clause for the id (which is always present), and store the
result in a field for later use:
diff --git a/manual/mapper/daos/select/README.md b/manual/mapper/daos/select/README.md
index fb6c4ca2077..7e8913e28dd 100644
--- a/manual/mapper/daos/select/README.md
+++ b/manual/mapper/daos/select/README.md
@@ -20,7 +20,7 @@ under the License.
## Select methods
Annotate a DAO method with [@Select] to generate a query that selects one or more rows, and maps
-them to [Entities](../../entities):
+them to [Entities](../../entities/README.md):
```java
@Dao
@@ -34,7 +34,7 @@ public interface ProductDao {
If the annotation doesn't have a [customWhereClause()], the mapper defaults to a selection by
primary key (partition key + clustering columns). The method's parameters must match the types of
-the [primary key columns](../../entities/#primary-key-columns), in the exact order (as defined by
+the [primary key columns](../../entities/README.md#primary-key-columns), in the exact order (as defined by
the [@PartitionKey] and [@ClusteringColumn] annotations). The parameter names don't necessarily need
to match the names of the columns.
@@ -85,7 +85,7 @@ whose values will be provided through additional method parameters. Note that it
possible to determine if a parameter is a primary key component or a placeholder value; therefore
the rule is that **if your method takes a partial primary key, the first parameter that is not a
primary key component must be explicitly annotated with
-[@CqlName](../../entities/#user-provided-names)**. For example if the primary key is `((day int,
+[@CqlName](../../entities/README.md#user-provided-names)**. For example if the primary key is `((day int,
hour int, minute int), ts timestamp)`:
```java
@@ -97,7 +97,7 @@ PagingIterable findDailySales(int day, @CqlName("l") int l);
A `Function` or `UnaryOperator`
can be added as the **last** parameter. It will be applied to the statement before execution. This
allows you to customize certain aspects of the request (page size, timeout, etc) at runtime. See
-[statement attributes](../statement_attributes/).
+[statement attributes](../statement_attributes/README.md).
### Return type
@@ -167,17 +167,17 @@ In all cases, the method can return:
MappedReactiveResultSet findByDescriptionReactive(String searchString);
```
-* a [custom type](../custom_types).
+* a [custom type](../custom_types/README.md).
### Target keyspace and table
-If a keyspace was specified [when creating the DAO](../../mapper/#dao-factory-methods), then the
+If a keyspace was specified [when creating the DAO](../../mapper/README.md#dao-factory-methods), then the
generated query targets that keyspace. Otherwise, it doesn't specify a keyspace, and will only work
if the mapper was built from a session that has a [default keyspace] set.
If a table was specified when creating the DAO, then the generated query targets that table.
Otherwise, it uses the default table name for the entity (which is determined by the name of the
-entity class and the [naming strategy](../../entities/#naming-strategy)).
+entity class and the [naming strategy](../../entities/README.md#naming-strategy)).
[default keyspace]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/session/SessionBuilder.html#withKeyspace-com.datastax.oss.driver.api.core.CqlIdentifier-
[@ClusteringColumn]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/mapper/annotations/ClusteringColumn.html
diff --git a/manual/mapper/daos/setentity/README.md b/manual/mapper/daos/setentity/README.md
index eeb7957f62e..0fc7a8c014b 100644
--- a/manual/mapper/daos/setentity/README.md
+++ b/manual/mapper/daos/setentity/README.md
@@ -20,7 +20,7 @@ under the License.
## SetEntity methods
Annotate a DAO method with [@SetEntity] to fill a core driver data structure from an
-[Entity](../../entities):
+[Entity](../../entities/README.md):
```java
public interface ProductDao {
@@ -97,7 +97,7 @@ The method must have two parameters: one is the entity instance, the other must
The order of the parameters does not matter.
-The annotation can define a [null saving strategy](../null_saving/) that applies to the properties
+The annotation can define a [null saving strategy](../null_saving/README.md) that applies to the properties
of the object to set. This is only really useful with bound statements (or bound statement
builders): if the target is a [UdtValue], the driver sends null fields in the serialized form
anyway, so both strategies are equivalent.
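A sketch of such a method (assuming a `Product` entity; the two parameters could equally be declared in the opposite order):

```java
@Dao
public interface ProductDao {
  // Copies the entity's properties into the builder's named values.
  // DO_NOT_SET leaves null properties unbound instead of writing nulls.
  @SetEntity(nullSavingStrategy = NullSavingStrategy.DO_NOT_SET)
  void bind(Product product, BoundStatementBuilder builder);
}
```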
diff --git a/manual/mapper/daos/statement_attributes/README.md b/manual/mapper/daos/statement_attributes/README.md
index f772df36775..ded66bcbbd7 100644
--- a/manual/mapper/daos/statement_attributes/README.md
+++ b/manual/mapper/daos/statement_attributes/README.md
@@ -19,8 +19,8 @@ under the License.
## Statement attributes
-The [@Delete](../delete/), [@Insert](../insert/), [@Query](../query/), [@Select](../select/) and
-[@Update](../update/) annotations allow you to control some aspects of the execution of the
+The [@Delete](../delete/README.md), [@Insert](../insert/README.md), [@Query](../query/README.md), [@Select](../select/README.md) and
+[@Update](../update/README.md) annotations allow you to control some aspects of the execution of the
underlying statement, such as the consistency level, timeout, etc.
### As a parameter
diff --git a/manual/mapper/daos/update/README.md b/manual/mapper/daos/update/README.md
index 87e9286c800..89f47f7a31c 100644
--- a/manual/mapper/daos/update/README.md
+++ b/manual/mapper/daos/update/README.md
@@ -20,7 +20,7 @@ under the License.
## Update methods
Annotate a DAO method with [@Update] to generate a query that updates one or more
-[entities](../../entities):
+[entities](../../entities/README.md):
```java
@Dao
@@ -77,7 +77,7 @@ with `customIfClause` (if both are set, the mapper processor will generate a com
boolean updateIfExists(Product product);
```
-The annotation can define a [null saving strategy](../null_saving/) that applies to the properties
+The annotation can define a [null saving strategy](../null_saving/README.md) that applies to the properties
of the entity to update. This allows you to implement partial updates, by passing a "template"
entity that only contains the properties you want to modify:
@@ -95,7 +95,7 @@ dao.updateWhereIdIn(template, 42, 43); // Will only update 'description' on the
A `Function` or `UnaryOperator`
can be added as the **last** parameter. It will be applied to the statement before execution. This
allows you to customize certain aspects of the request (page size, timeout, etc) at runtime. See
-[statement attributes](../statement_attributes/).
+[statement attributes](../statement_attributes/README.md).
### Return type
@@ -150,11 +150,11 @@ The method can return:
ReactiveResultSet updateReactive(Product product);
```
-* a [custom type](../custom_types).
+* a [custom type](../custom_types/README.md).
### Target keyspace and table
-If a keyspace was specified [when creating the DAO](../../mapper/#dao-factory-methods), then the
+If a keyspace was specified [when creating the DAO](../../mapper/README.md#dao-factory-methods), then the
generated query targets that keyspace. Otherwise, it doesn't specify a keyspace, and will only work
if the mapper was built from a session that has a [default keyspace] set.
diff --git a/manual/mapper/entities/README.md b/manual/mapper/entities/README.md
index 978c781245f..de46fa4963b 100644
--- a/manual/mapper/entities/README.md
+++ b/manual/mapper/entities/README.md
@@ -38,8 +38,8 @@ POJO annotated with [@Entity], must expose a no-arg constructor.
-----
-An entity is a Java class that will be mapped to a Cassandra table or [UDT](../../core/udts).
-Entities are used as arguments or return types of [DAO](../daos/) methods; they can also be nested
+An entity is a Java class that will be mapped to a Cassandra table or [UDT](../../core/udts/README.md).
+Entities are used as arguments or return types of [DAO](../daos/README.md) methods; they can also be nested
inside other entities (to map UDT columns).
In order to be detected by the mapper, the class must be annotated with [@Entity]:
@@ -280,7 +280,7 @@ private int day;
```
This information is used by some of the DAO method annotations; for example,
-[@Select](../daos/select/)'s default behavior is to generate a selection by primary key.
+[@Select](../daos/select/README.md)'s default behavior is to generate a selection by primary key.
#### Computed properties
diff --git a/manual/mapper/mapper/README.md b/manual/mapper/mapper/README.md
index 752424c9a3b..fc7d6218309 100644
--- a/manual/mapper/mapper/README.md
+++ b/manual/mapper/mapper/README.md
@@ -30,7 +30,7 @@ Interface annotated with [@Mapper], entry point to mapper features.
-----
The mapper interface is the top-level entry point to mapping features. It wraps a core driver
-session, and acts as a factory of [DAO](../daos/) objects that will be used to execute requests.
+session, and acts as a factory of [DAO](../daos/README.md) objects that will be used to execute requests.
It must be annotated with [@Mapper]:
@@ -58,7 +58,7 @@ public interface InventoryMapper {
```
The builder allows you to create a mapper instance, by wrapping a core `CqlSession` (if you need
-more details on how to create a session, refer to the [core driver documentation](../../core/)).
+more details on how to create a session, refer to the [core driver documentation](../../core/README.md)).
```java
CqlSession session = CqlSession.builder().build();
@@ -161,7 +161,7 @@ ProductDao dao3 = inventoryMapper.productDao("keyspace3", "table3");
* `dao1.findById` executes the query `SELECT ... FROM product WHERE id = ?`. No table name was
specified for the DAO, so it uses the default name for the `Product` entity (which in this case is
- the entity name converted with the default [naming strategy](../entities/#naming-strategy)). No
+ the entity name converted with the default [naming strategy](../entities/README.md#naming-strategy)). No
keyspace was specified either, so the table is unqualified, and this DAO will only work with a
session that was built with a default keyspace:
@@ -178,12 +178,12 @@ ProductDao dao3 = inventoryMapper.productDao("keyspace3", "table3");
= ?`.
The DAO's keyspace and table can also be injected into custom query strings; see [Query
-methods](../daos/query/).
+methods](../daos/query/README.md).
#### Execution profile
Similarly, a DAO can be parameterized to use a particular [configuration
-profile](../../core/configuration/#execution-profiles):
+profile](../../core/configuration/README.md#execution-profiles):
```java
@Mapper
@@ -212,7 +212,7 @@ ProductDao dao2 = inventoryMapper.productDao("keyspace2", "product");
```
For each entity referenced in the DAO, the mapper tries to find a schema element with the
-corresponding name (according to the [naming strategy](../entities/#naming-strategy)). It tries
+corresponding name (according to the [naming strategy](../entities/README.md#naming-strategy)). It tries
tables first, then falls back to UDTs if there is no match. You can speed up this process by
providing a hint:
@@ -228,12 +228,12 @@ public class Address { ... }
The following checks are then performed:
* for each entity field, the database table or UDT must contain a column with the corresponding name
- (according to the [naming strategy](../entities/#naming-strategy)).
+ (according to the [naming strategy](../entities/README.md#naming-strategy)).
* the types must be compatible, either according to the [default type
- mappings](../../core/#cql-to-java-type-mapping), or via a [custom
- codec](../../core/custom_codecs/) registered with the session.
+ mappings](../../core/README.md#cql-to-java-type-mapping), or via a [custom
+ codec](../../core/custom_codecs/README.md) registered with the session.
* additionally, if the target element is a table, the primary key must be [properly
- annotated](../entities/#primary-key-columns) in the entity.
+ annotated](../entities/README.md#primary-key-columns) in the entity.
If any of those steps fails, an `IllegalArgumentException` is thrown.
diff --git a/manual/osgi/README.md b/manual/osgi/README.md
index 92cd4625b68..d35c6c66088 100644
--- a/manual/osgi/README.md
+++ b/manual/osgi/README.md
@@ -29,7 +29,7 @@ valid OSGi bundles:
Note: some of the driver dependencies are not valid OSGi bundles. Most of them are optional, and the
driver can work properly without them (see the
-[Integration>Driver dependencies](../core/integration/#driver-dependencies) section for more
+[Integration>Driver dependencies](../core/integration/README.md#driver-dependencies) section for more
details); in such cases, the corresponding packages are declared with optional resolution in
`Import-Package` directives. However, if you need to access such packages in an OSGi container you
MUST wrap the corresponding jar in a valid OSGi bundle and make it available for provisioning to the
@@ -40,7 +40,7 @@ OSGi runtime.
`java-driver-core-shaded` shares the same bundle name as `java-driver-core`
(`com.datastax.oss.driver.core`). It can be used as a drop-in replacement in cases where you have
an explicit version of dependency in your project different than that of the driver's. Refer to
-[shaded jar](../core/shaded_jar/) for more information.
+[shaded jar](../core/shaded_jar/README.md) for more information.
## Using a custom `ClassLoader`
@@ -136,7 +136,7 @@ The above configuration will honor all programmatic settings, but will look for
## What does the "Error loading libc" DEBUG message mean?
The driver is able to perform native system calls through [JNR] in some cases, for example to
-achieve microsecond resolution when [generating timestamps](../core/query_timestamps/).
+achieve microsecond resolution when [generating timestamps](../core/query_timestamps/README.md).
Unfortunately, some of the JNR artifacts available from Maven are not valid OSGi bundles and cannot
be used in OSGi applications.
@@ -154,7 +154,7 @@ starting the driver:
system clock
-[driver configuration]: ../core/configuration
+[driver configuration]: ../core/configuration/README.md
[OSGi]:https://www.osgi.org
[JNR]: https://github.com/jnr/jnr-posix
[withClassLoader()]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/core/session/SessionBuilder.html#withClassLoader-java.lang.ClassLoader-
diff --git a/manual/query_builder/README.md b/manual/query_builder/README.md
index d1932b329e7..a5121f9914a 100644
--- a/manual/query_builder/README.md
+++ b/manual/query_builder/README.md
@@ -81,7 +81,7 @@ Select select =
```
When your query is complete, you can either extract a raw query string, or turn it into a
-[simple statement](../core/statements/simple) (or its builder):
+[simple statement](../core/statements/simple/README.md) (or its builder):
```java
String cql = select.asCql();
@@ -137,7 +137,7 @@ On the downside, immutability means that the query builder creates lots of short
Modern garbage collectors are good at handling that, but still we recommend that you **avoid using
the query builder in your hot path**:
-* favor [bound statements](../core/statements/prepared) for queries that are used often. You can
+* favor [bound statements](../core/statements/prepared/README.md) for queries that are used often. You can
still use the query builder and prepare the result:
```java
@@ -157,7 +157,7 @@ the query builder in your hot path**:
All fluent API methods use [CqlIdentifier] for schema element names (keyspaces, tables, columns...).
But, for convenience, there are also `String` overloads that take the CQL form (see [Case
-sensitivity](../case_sensitivity) for more explanations).
+sensitivity](../case_sensitivity/README.md) for more explanations).
For conciseness, we'll use the string-based versions for the examples in this manual.
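To make the difference concrete, here is a sketch of the two equivalent forms (the table name is illustrative):

```java
// Explicit identifier: case sensitivity is stated up front.
Select s1 = selectFrom(CqlIdentifier.fromCql("\"myTable\"")).all();
// String overload: the argument is parsed as its CQL form.
Select s2 = selectFrom("\"myTable\"").all();
// Both produce: SELECT * FROM "myTable"
```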
@@ -219,17 +219,17 @@ a better alternative.
For a complete tour of the API, browse the child pages in this manual:
* statement types:
- * [SELECT](select/)
- * [INSERT](insert/)
- * [UPDATE](update/)
- * [DELETE](delete/)
- * [TRUNCATE](truncate/)
- * [Schema builder](schema/) (for DDL statements such as CREATE TABLE, etc.)
+ * [SELECT](select/README.md)
+ * [INSERT](insert/README.md)
+ * [UPDATE](update/README.md)
+ * [DELETE](delete/README.md)
+ * [TRUNCATE](truncate/README.md)
+ * [Schema builder](schema/README.md) (for DDL statements such as CREATE TABLE, etc.)
* common topics:
- * [Relations](relation/)
- * [Conditions](condition/)
- * [Terms](term/)
- * [Idempotence](idempotence/)
+ * [Relations](relation/README.md)
+ * [Conditions](condition/README.md)
+ * [Terms](term/README.md)
+ * [Idempotence](idempotence/README.md)
[QueryBuilder]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/querybuilder/QueryBuilder.html
[SchemaBuilder]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/querybuilder/SchemaBuilder.html
diff --git a/manual/query_builder/condition/README.md b/manual/query_builder/condition/README.md
index 1a6a37eb2ef..bb8b902bfe9 100644
--- a/manual/query_builder/condition/README.md
+++ b/manual/query_builder/condition/README.md
@@ -19,8 +19,8 @@ under the License.
## Conditions
-A condition is a clause that appears after the IF keyword in a conditional [UPDATE](../update/) or
-[DELETE](../delete/) statement.
+A condition is a clause that appears after the IF keyword in a conditional [UPDATE](../update/README.md) or
+[DELETE](../delete/README.md) statement.
The easiest way to add a condition is with an `ifXxx` method in the fluent API:
@@ -59,7 +59,7 @@ deleteFrom("user")
```
Conditions are composed of a left operand, an operator, and a right-hand-side
-[term](../term/).
+[term](../term/README.md).
### Simple columns
diff --git a/manual/query_builder/delete/README.md b/manual/query_builder/delete/README.md
index 8e97920ae9f..3f62c792bd1 100644
--- a/manual/query_builder/delete/README.md
+++ b/manual/query_builder/delete/README.md
@@ -21,7 +21,7 @@ under the License.
To start a DELETE query, use one of the `deleteFrom` methods in [QueryBuilder]. There are several
variants depending on whether your table name is qualified, and whether you use
-[identifiers](../../case_sensitivity/) or raw strings:
+[identifiers](../../case_sensitivity/README.md) or raw strings:
```java
import static com.datastax.oss.driver.api.querybuilder.QueryBuilder.*;
@@ -134,7 +134,7 @@ SimpleStatement statement = deleteFrom("user").whereColumn("k").isEqualTo(bindMa
```
Relations are a common feature used by many types of statements, so they have a
-[dedicated page](../relation) in this manual.
+[dedicated page](../relation/README.md) in this manual.
### Conditions
@@ -158,7 +158,7 @@ deleteFrom("user")
```
Conditions are a common feature used by UPDATE and DELETE, so they have a
-[dedicated page](../condition) in this manual.
+[dedicated page](../condition/README.md) in this manual.
[QueryBuilder]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/querybuilder/QueryBuilder.html
[Selector]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/querybuilder/select/Selector.html
diff --git a/manual/query_builder/idempotence/README.md b/manual/query_builder/idempotence/README.md
index 2f97151d277..1c44be31792 100644
--- a/manual/query_builder/idempotence/README.md
+++ b/manual/query_builder/idempotence/README.md
@@ -20,7 +20,7 @@ under the License.
## Idempotence in the query builder
When you generate a statement (or a statement builder) from the query builder, it automatically
-infers the [isIdempotent](../../core/idempotence/) flag:
+infers the [isIdempotent](../../core/idempotence/README.md) flag:
```java
SimpleStatement statement =
diff --git a/manual/query_builder/insert/README.md b/manual/query_builder/insert/README.md
index 6bac896d9b8..806eace154d 100644
--- a/manual/query_builder/insert/README.md
+++ b/manual/query_builder/insert/README.md
@@ -21,7 +21,7 @@ under the License.
To start an INSERT query, use one of the `insertInto` methods in [QueryBuilder]. There are
several variants depending on whether your table name is qualified, and whether you use
-[identifiers](../../case_sensitivity/) or raw strings:
+[identifiers](../../case_sensitivity/README.md) or raw strings:
```java
import static com.datastax.oss.driver.api.querybuilder.QueryBuilder.*;
@@ -46,7 +46,7 @@ insertInto("user")
// INSERT INTO user (id,first_name,last_name) VALUES (?,'John','Doe')
```
-The column names can only be simple identifiers. The values are [terms](../term).
+The column names can only be simple identifiers. The values are [terms](../term/README.md).
#### JSON insert
diff --git a/manual/query_builder/relation/README.md b/manual/query_builder/relation/README.md
index eb1c728888e..5ce171afe02 100644
--- a/manual/query_builder/relation/README.md
+++ b/manual/query_builder/relation/README.md
@@ -24,10 +24,10 @@ statement operates on.
Relations are used by the following statements:
-* [SELECT](../select/)
-* [UPDATE](../update/)
-* [DELETE](../delete/)
-* [CREATE MATERIALIZED VIEW](../schema/materialized_view/)
+* [SELECT](../select/README.md)
+* [UPDATE](../update/README.md)
+* [DELETE](../delete/README.md)
+* [CREATE MATERIALIZED VIEW](../schema/materialized_view/README.md)
The easiest way to add a relation is with a `whereXxx` method in the fluent API:
@@ -60,7 +60,7 @@ selectFrom("sensor_data").all()
```
Relations are generally composed of a left operand, an operator, and an optional right-hand-side
-[term](../term/). The type of relation determines which operators are available.
+[term](../term/README.md). The type of relation determines which operators are available.
### Simple columns
diff --git a/manual/query_builder/schema/README.md b/manual/query_builder/schema/README.md
index 0472c8e8c6f..ac8eb9bfe5b 100644
--- a/manual/query_builder/schema/README.md
+++ b/manual/query_builder/schema/README.md
@@ -19,7 +19,7 @@ under the License.
# Schema builder
-The schema builder is an additional API provided by [java-driver-query-builder](../) that enables
+The schema builder is an additional API provided by [java-driver-query-builder](../README.md) that enables
one to *generate CQL DDL queries programmatically*. For example, it could be used to:
* based on application configuration, generate schema queries instead of building CQL strings by
@@ -46,7 +46,7 @@ try (CqlSession session = CqlSession.builder().build()) {
}
```
-The [general concepts](../#general-concepts) and [non goals](../#non-goals) defined for the query
+The [general concepts](../README.md#general-concepts) and [non goals](../README.md#non-goals) defined for the query
builder also apply for the schema builder.
### Building DDL Queries
@@ -55,12 +55,12 @@ The schema builder offers functionality for creating, altering and dropping elem
schema. For a complete tour of the API, browse the child pages in the manual for each schema
element type:
-* [keyspace](keyspace/)
-* [table](table/)
-* [index](index/)
-* [materialized view](materialized_view/)
-* [type](type/)
-* [function](function/)
-* [aggregate](aggregate/)
+* [keyspace](keyspace/README.md)
+* [table](table/README.md)
+* [index](index/README.md)
+* [materialized view](materialized_view/README.md)
+* [type](type/README.md)
+* [function](function/README.md)
+* [aggregate](aggregate/README.md)
[SchemaBuilder]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/querybuilder/SchemaBuilder.html
diff --git a/manual/query_builder/schema/materialized_view/README.md b/manual/query_builder/schema/materialized_view/README.md
index c4f495f95aa..659b5f345af 100644
--- a/manual/query_builder/schema/materialized_view/README.md
+++ b/manual/query_builder/schema/materialized_view/README.md
@@ -45,7 +45,7 @@ There are a number of steps that must be executed to complete a materialized vie
* Specify the base table using `asSelectFrom`
* Specify the columns to include in the view via `column` or `columns`
-* Specify the where clause using [relations](../../relation)
+* Specify the where clause using [relations](../../relation/README.md)
* Specify the partition key columns using `withPartitionKey` and `withClusteringColumn`
For example, the following defines a complete `CREATE MATERIALIZED VIEW` statement:
@@ -66,7 +66,7 @@ createMaterializedView("cycling", "cyclist_by_age")
Please note that not all WHERE clause relations may be compatible with materialized views.
-Like a [table](../table), one may additionally provide configuration such as clustering order,
+Like a [table](../table/README.md), one may additionally provide configuration such as clustering order,
compaction options and so on. Refer to [RelationStructure] for documentation on additional
configuration that may be provided for a view.
diff --git a/manual/query_builder/select/README.md b/manual/query_builder/select/README.md
index 0425423a402..fe1a5be8102 100644
--- a/manual/query_builder/select/README.md
+++ b/manual/query_builder/select/README.md
@@ -21,7 +21,7 @@ under the License.
Start your SELECT with the `selectFrom` method in [QueryBuilder]. There are several variants
depending on whether your table name is qualified, and whether you use
-[identifiers](../../case_sensitivity/) or raw strings:
+[identifiers](../../case_sensitivity/README.md) or raw strings:
```java
import static com.datastax.oss.driver.api.querybuilder.QueryBuilder.*;
@@ -321,7 +321,7 @@ selectFrom("foo").quotient(literal(1), Selector.column("a"));
// SELECT 1/a FROM foo
```
-See the [terms](../term/#literals) section for more details on literals.
+See the [terms](../term/README.md#literals) section for more details on literals.
#### Raw snippets
@@ -358,7 +358,7 @@ Like selectors, they also have fluent shortcuts to build and add in a single cal
Relations are a common feature used by many types of statements, so they have a
-[dedicated page](../relation) in this manual.
+[dedicated page](../relation/README.md) in this manual.
### Other clauses
diff --git a/manual/query_builder/term/README.md b/manual/query_builder/term/README.md
index 460ed8dcb10..2f030025894 100644
--- a/manual/query_builder/term/README.md
+++ b/manual/query_builder/term/README.md
@@ -21,9 +21,9 @@ under the License.
A term is an expression that does not involve the value of a column. It is used:
-* as an argument to some selectors, for example the indices of [sub-element](../select/#sub-element)
+* as an argument to some selectors, for example the indices of [sub-element](../select/README.md#sub-elements)
selectors;
-* as the right operand of [relations](../relation).
+* as the right operand of [relations](../relation/README.md).
To create a term, call one of the factory methods in [QueryBuilder]:
@@ -37,10 +37,10 @@ selectFrom("user").all().whereColumn("id").isEqualTo(literal(1));
```
The argument is converted according to the driver's
-[default type mappings](../../core/#cql-to-java-type-mapping). If there is no default mapping, you
+[default type mappings](../../core/README.md#cql-to-java-type-mapping). If there is no default mapping, you
will get a `CodecNotFoundException`.
-If you use [custom codecs](../../core/custom_codecs), you might need to inline a custom Java type.
+If you use [custom codecs](../../core/custom_codecs/README.md), you might need to inline a custom Java type.
You can pass a [CodecRegistry] as the second argument (most likely, this will be the registry of
your session):
diff --git a/manual/query_builder/truncate/README.md b/manual/query_builder/truncate/README.md
index c8cd6945123..9de93606293 100644
--- a/manual/query_builder/truncate/README.md
+++ b/manual/query_builder/truncate/README.md
@@ -21,7 +21,7 @@ under the License.
To create a TRUNCATE query, use one of the `truncate` methods in [QueryBuilder]. There are several
variants depending on whether your table name is qualified, and whether you use
-[identifiers](../../case_sensitivity/) or raw strings:
+[identifiers](../../case_sensitivity/README.md) or raw strings:
```java
import static com.datastax.oss.driver.api.querybuilder.QueryBuilder.*;
diff --git a/manual/query_builder/update/README.md b/manual/query_builder/update/README.md
index 15502f52bb7..e5484780e1d 100644
--- a/manual/query_builder/update/README.md
+++ b/manual/query_builder/update/README.md
@@ -21,7 +21,7 @@ under the License.
To start an UPDATE query, use one of the `update` methods in [QueryBuilder]. There are several
variants depending on whether your table name is qualified, and whether you use
-[identifiers](../../case_sensitivity/) or raw strings:
+[identifiers](../../case_sensitivity/README.md) or raw strings:
```java
import static com.datastax.oss.driver.api.querybuilder.QueryBuilder.*;
@@ -242,7 +242,7 @@ SimpleStatement statement = update("foo")
```
Relations are a common feature used by many types of statements, so they have a
-[dedicated page](../relation) in this manual.
+[dedicated page](../relation/README.md) in this manual.
### Conditions
@@ -268,7 +268,7 @@ update("foo")
```
Conditions are a common feature used by UPDATE and DELETE, so they have a
-[dedicated page](../condition) in this manual.
+[dedicated page](../condition/README.md) in this manual.
[QueryBuilder]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/querybuilder/QueryBuilder.html
[Assignment]: https://docs.datastax.com/en/drivers/java/4.17/com/datastax/oss/driver/api/querybuilder/update/Assignment.html
diff --git a/mkdocs.yml b/mkdocs.yml
new file mode 100644
index 00000000000..d2075a57112
--- /dev/null
+++ b/mkdocs.yml
@@ -0,0 +1,164 @@
+site_name: Java Driver for Apache Cassandra
+site_description: Java Driver for Apache Cassandra® Documentation
+site_url: https://apache.github.io/cassandra-java-driver
+repo_url: https://github.com/apache/cassandra-java-driver
+repo_name: apache/cassandra-java-driver
+
+docs_dir: mkdocs
+site_dir: docs
+
+theme:
+  name: material
+  palette:
+    - scheme: default
+      primary: blue
+      accent: blue
+  features:
+    - navigation.tabs
+    - navigation.sections
+    - navigation.top
+    - search.highlight
+    - search.share
+
+markdown_extensions:
+  - admonition
+  - codehilite
+  - pymdownx.superfences
+  - pymdownx.tabbed
+  - toc:
+      permalink: true
+
+nav:
+  - Home: README.md
+  - Manual:
+      - Overview: manual/README.md
+      - API Conventions: manual/api_conventions/README.md
+      - Case Sensitivity: manual/case_sensitivity/README.md
+      - Cloud: manual/cloud/README.md
+      - Core:
+          - Overview: manual/core/README.md
+          - Integration: manual/core/integration/README.md
+          - Configuration:
+              - Overview: manual/core/configuration/README.md
+              - Reference: manual/core/configuration/reference/README.md
+          - Authentication: manual/core/authentication/README.md
+          - SSL: manual/core/ssl/README.md
+          - Load Balancing: manual/core/load_balancing/README.md
+          - Pooling: manual/core/pooling/README.md
+          - Reconnection: manual/core/reconnection/README.md
+          - Retries: manual/core/retries/README.md
+          - Speculative Execution: manual/core/speculative_execution/README.md
+          - Metrics: manual/core/metrics/README.md
+          - Logging: manual/core/logging/README.md
+          - Statements:
+              - Overview: manual/core/statements/README.md
+              - Batch: manual/core/statements/batch/README.md
+              - Per Query Keyspace: manual/core/statements/per_query_keyspace/README.md
+              - Prepared: manual/core/statements/prepared/README.md
+              - Simple: manual/core/statements/simple/README.md
+          - Paging: manual/core/paging/README.md
+          - Async Programming: manual/core/async/README.md
+          - Reactive Streams: manual/core/reactive/README.md
+          - Custom Codecs: manual/core/custom_codecs/README.md
+          - Temporal Types: manual/core/temporal_types/README.md
+          - Tuples: manual/core/tuples/README.md
+          - UDTs: manual/core/udts/README.md
+          - Compression: manual/core/compression/README.md
+          - Address Resolution: manual/core/address_resolution/README.md
+          - Request Tracker: manual/core/request_tracker/README.md
+          - Request ID: manual/core/request_id/README.md
+          - Throttling: manual/core/throttling/README.md
+          - Tracing: manual/core/tracing/README.md
+          - Performance: manual/core/performance/README.md
+          - Metadata:
+              - Overview: manual/core/metadata/README.md
+              - Node: manual/core/metadata/node/README.md
+              - Schema: manual/core/metadata/schema/README.md
+              - Token: manual/core/metadata/token/README.md
+          - Control Connection: manual/core/control_connection/README.md
+          - Native Protocol: manual/core/native_protocol/README.md
+          - Non-blocking: manual/core/non_blocking/README.md
+          - Query Timestamps: manual/core/query_timestamps/README.md
+          - Idempotence: manual/core/idempotence/README.md
+          - Detachable Types: manual/core/detachable_types/README.md
+          - DSE:
+              - Overview: manual/core/dse/README.md
+              - Geotypes: manual/core/dse/geotypes/README.md
+              - Graph:
+                  - Overview: manual/core/dse/graph/README.md
+                  - Fluent:
+                      - Overview: manual/core/dse/graph/fluent/README.md
+                      - Explicit: manual/core/dse/graph/fluent/explicit/README.md
+                      - Implicit: manual/core/dse/graph/fluent/implicit/README.md
+                  - Options: manual/core/dse/graph/options/README.md
+                  - Results: manual/core/dse/graph/results/README.md
+                  - Script: manual/core/dse/graph/script/README.md
+          - GraalVM: manual/core/graalvm/README.md
+          - Shaded JAR: manual/core/shaded_jar/README.md
+          - BOM: manual/core/bom/README.md
+      - Query Builder:
+          - Overview: manual/query_builder/README.md
+          - Select: manual/query_builder/select/README.md
+          - Insert: manual/query_builder/insert/README.md
+          - Update: manual/query_builder/update/README.md
+          - Delete: manual/query_builder/delete/README.md
+          - Schema:
+              - Overview: manual/query_builder/schema/README.md
+              - Aggregate: manual/query_builder/schema/aggregate/README.md
+              - Function: manual/query_builder/schema/function/README.md
+              - Index: manual/query_builder/schema/index/README.md
+              - Keyspace: manual/query_builder/schema/keyspace/README.md
+              - Materialized View: manual/query_builder/schema/materialized_view/README.md
+              - Table: manual/query_builder/schema/table/README.md
+              - Type: manual/query_builder/schema/type/README.md
+          - Truncate: manual/query_builder/truncate/README.md
+          - Condition: manual/query_builder/condition/README.md
+          - Relation: manual/query_builder/relation/README.md
+          - Term: manual/query_builder/term/README.md
+          - Idempotence: manual/query_builder/idempotence/README.md
+      - Mapper:
+          - Overview: manual/mapper/README.md
+          - Entities: manual/mapper/entities/README.md
+          - DAOs:
+              - Overview: manual/mapper/daos/README.md
+              - Custom Types: manual/mapper/daos/custom_types/README.md
+              - Delete: manual/mapper/daos/delete/README.md
+              - Get Entity: manual/mapper/daos/getentity/README.md
+              - Increment: manual/mapper/daos/increment/README.md
+              - Insert: manual/mapper/daos/insert/README.md
+              - Null Saving: manual/mapper/daos/null_saving/README.md
+              - Query: manual/mapper/daos/query/README.md
+              - Query Provider: manual/mapper/daos/queryprovider/README.md
+              - Select: manual/mapper/daos/select/README.md
+              - Set Entity: manual/mapper/daos/setentity/README.md
+              - Statement Attributes: manual/mapper/daos/statement_attributes/README.md
+              - Update: manual/mapper/daos/update/README.md
+          - Mapper: manual/mapper/mapper/README.md
+          - Configuration:
+              - Overview: manual/mapper/config/README.md
+              - Kotlin: manual/mapper/config/kotlin/README.md
+              - Lombok: manual/mapper/config/lombok/README.md
+              - Record: manual/mapper/config/record/README.md
+              - Scala: manual/mapper/config/scala/README.md
+      - Developer:
+          - Overview: manual/developer/README.md
+          - Common:
+              - Overview: manual/developer/common/README.md
+              - Concurrency: manual/developer/common/concurrency/README.md
+              - Context: manual/developer/common/context/README.md
+              - Event Bus: manual/developer/common/event_bus/README.md
+          - Native Protocol: manual/developer/native_protocol/README.md
+          - Netty Pipeline: manual/developer/netty_pipeline/README.md
+          - Request Execution: manual/developer/request_execution/README.md
+          - Admin: manual/developer/admin/README.md
+      - OSGi: manual/osgi/README.md
+  - API References: api/index.html
+  - FAQ: faq/README.md
+  - Changelog: changelog/README.md
+  - Upgrade Guide: upgrade_guide/README.md
+  - Contribute: CONTRIBUTING.md
+
+plugins:
+  - search
+  - awesome-pages
+  - macros
diff --git a/mkdocs/CONTRIBUTING.md b/mkdocs/CONTRIBUTING.md
new file mode 120000
index 00000000000..44fcc634393
--- /dev/null
+++ b/mkdocs/CONTRIBUTING.md
@@ -0,0 +1 @@
+../CONTRIBUTING.md
\ No newline at end of file
diff --git a/mkdocs/README.md b/mkdocs/README.md
new file mode 120000
index 00000000000..32d46ee883b
--- /dev/null
+++ b/mkdocs/README.md
@@ -0,0 +1 @@
+../README.md
\ No newline at end of file
diff --git a/mkdocs/api b/mkdocs/api
new file mode 120000
index 00000000000..fca923bc74c
--- /dev/null
+++ b/mkdocs/api
@@ -0,0 +1 @@
+../target/site/apidocs
\ No newline at end of file
diff --git a/mkdocs/changelog b/mkdocs/changelog
new file mode 120000
index 00000000000..0891c85c57a
--- /dev/null
+++ b/mkdocs/changelog
@@ -0,0 +1 @@
+../changelog
\ No newline at end of file
diff --git a/mkdocs/faq b/mkdocs/faq
new file mode 120000
index 00000000000..456533382a0
--- /dev/null
+++ b/mkdocs/faq
@@ -0,0 +1 @@
+../faq
\ No newline at end of file
diff --git a/mkdocs/manual b/mkdocs/manual
new file mode 120000
index 00000000000..0fd27f4be61
--- /dev/null
+++ b/mkdocs/manual
@@ -0,0 +1 @@
+../manual
\ No newline at end of file
diff --git a/mkdocs/upgrade_guide b/mkdocs/upgrade_guide
new file mode 120000
index 00000000000..3bfa50a874a
--- /dev/null
+++ b/mkdocs/upgrade_guide
@@ -0,0 +1 @@
+../upgrade_guide
\ No newline at end of file
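The symlink entries above (git file mode `120000`) are what let `docs_dir: mkdocs` serve content that actually lives at the repository root without copying it. A small sketch of the same layout (temporary paths, purely illustrative):

```python
import os
import tempfile

# Sketch: docs_dir is mkdocs/, filled with relative symlinks so MkDocs can
# see top-level folders (manual/, faq/, ...) and the root README.md.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "manual"))
os.makedirs(os.path.join(root, "faq"))
os.makedirs(os.path.join(root, "mkdocs"))
os.chdir(os.path.join(root, "mkdocs"))
os.symlink("../manual", "manual")      # like mkdocs/manual -> ../manual in the PR
os.symlink("../faq", "faq")            # like mkdocs/faq    -> ../faq
os.symlink("../README.md", "README.md")
print(os.readlink("manual"))           # -> ../manual
```

Because the links are relative, they resolve on any checkout; the `api` symlink additionally assumes `target/site/apidocs` has been generated (presumably by the aggregate javadoc execution) before the site is built.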
diff --git a/pom.xml b/pom.xml
index 6834cdd1882..f3008708c32 100644
--- a/pom.xml
+++ b/pom.xml
@@ -804,6 +804,23 @@ limitations under the License.]]>
true
all,-missing
com.datastax.*.driver.internal*
+            <additionalDependencies>
+              <additionalDependency>
+                <groupId>com.google.errorprone</groupId>
+                <artifactId>error_prone_annotations</artifactId>
+                <version>2.3.4</version>
+              </additionalDependency>
+              <additionalDependency>
+                <groupId>com.github.stephenc.jcip</groupId>
+                <artifactId>jcip-annotations</artifactId>
+                <version>1.0-1</version>
+              </additionalDependency>
+              <additionalDependency>
+                <groupId>com.github.spotbugs</groupId>
+                <artifactId>spotbugs-annotations</artifactId>
+                <version>3.1.12</version>
+              </additionalDependency>
+            </additionalDependencies>
apiNote
@@ -882,7 +899,49 @@ limitations under the License.]]>
false
+
+
+ aggregate-javadoc
+
+ aggregate
+
+ site
+
+ false
+
+ org.apache.cassandra:java-driver-core
+ org.apache.cassandra:java-driver-query-builder
+ org.apache.cassandra:java-driver-mapper-runtime
+
+
+ core
+ query-builder
+ mapper-runtime
+
+ Apache Cassandra Java Driver ${project.version} API
+ The Apache Software Foundation. All rights reserved.
+ ]]>
+
+
+            <additionalDependencies>
+              <additionalDependency>
+                <groupId>com.google.errorprone</groupId>
+                <artifactId>error_prone_annotations</artifactId>
+                <version>2.3.4</version>
+              </additionalDependency>
+              <additionalDependency>
+                <groupId>com.github.stephenc.jcip</groupId>
+                <artifactId>jcip-annotations</artifactId>
+                <version>1.0-1</version>
+              </additionalDependency>
+              <additionalDependency>
+                <groupId>com.github.spotbugs</groupId>
+                <artifactId>spotbugs-annotations</artifactId>
+                <version>3.1.12</version>
+              </additionalDependency>
+            </additionalDependencies>
maven-release-plugin
diff --git a/upgrade_guide/README.md b/upgrade_guide/README.md
index 56d55aaab36..c3cfe2e8311 100644
--- a/upgrade_guide/README.md
+++ b/upgrade_guide/README.md
@@ -80,7 +80,7 @@ session.execute(preparedInsert.bind(3, vector));
In some cases, it makes sense to access the vector directly as an array of some numerical type. This version
supports such use cases by providing a codec which translates a CQL vector to and from a primitive array. Only float arrays are supported.
-You can find more information about this codec in the manual documentation on [custom codecs](../manual/core/custom_codecs/)
+You can find more information about this codec in the manual documentation on [custom codecs](../manual/core/custom_codecs/README.md)
### 4.15.0
@@ -149,7 +149,7 @@ explicitly declare a dependency on the Esri library:
```
-See the [integration](../manual/core/integration/#esri) section in the manual for more details.
+See the [integration](../manual/core/integration/README.md#esri) section in the manual for more details.
### 4.13.0
@@ -162,7 +162,7 @@ If you were building a native image for your application, please verify your nat
configuration. Most of the extra configuration required until now is likely to not be necessary
anymore.
-Refer to this [manual page](../manual/core/graalvm) for details.
+Refer to this [manual page](../manual/core/graalvm/README.md) for details.
#### Registration of multiple listeners and trackers
@@ -257,8 +257,8 @@ row or in the target statement, *leaving unmatched properties untouched*.
This new, lenient behavior allows to achieve the equivalent of driver 3.x
[lenient mapping](https://docs.datastax.com/en/developer/java-driver/3.10/manual/object_mapper/using/#manual-mapping).
-Read the manual pages on [@GetEntity](../manual/mapper/daos/getentity) methods and
-[@SetEntity](../manual/mapper/daos/setentity) methods for more details and examples of lenient mode.
+Read the manual pages on [@GetEntity](../manual/mapper/daos/getentity/README.md) methods and
+[@SetEntity](../manual/mapper/daos/setentity/README.md) methods for more details and examples of lenient mode.
### 4.11.0
@@ -275,9 +275,9 @@ transparently selected as the protocol version to use.
[JAVA-2872](https://datastax-oss.atlassian.net/browse/JAVA-2872) introduced the ability to configure
how metric identifiers are generated. Metric names can now be configured, but most importantly,
-metric tags are now supported. See the [metrics](../manual/core/metrics/) section of the online
+metric tags are now supported. See the [metrics](../manual/core/metrics/README.md) section of the online
manual, or the `advanced.metrics.id-generator` section in the
-[reference.conf](../manual/core/configuration/reference/) file for details.
+[reference.conf](../manual/core/configuration/reference/README.md) file for details.
Users should not experience any disruption. However, those using metrics libraries that support tags
are encouraged to try out the new `TaggingMetricIdGenerator`, as it generates metric names and tags
@@ -334,7 +334,7 @@ has been deprecated; it should be replaced with a node distance evaluator class
[JAVA-2899](https://datastax-oss.atlassian.net/browse/JAVA-2899) re-introduced the ability to
perform cross-datacenter failover using the driver's built-in load balancing policies. See [Load
-balancing](../manual/core/loadbalancing/) in the manual for details.
+balancing](../manual/core/load_balancing/README.md) in the manual for details.
Cross-datacenter failover is disabled by default, therefore existing applications should not
experience any disruption.
@@ -443,7 +443,7 @@ your POM file:
```
-See the [integration](../manual/core/integration/#tinker-pop) section in the manual for more details
+See the [integration](../manual/core/integration/README.md#tinkerpop) section in the manual for more details
as well as a driver vs. TinkerPop version compatibility matrix.
### 4.5.x - 4.6.0
@@ -459,7 +459,7 @@ separate DSE driver.
#### For Apache Cassandra® users
-The great news is that [reactive execution](../manual/core/reactive/) is now available for everyone.
+The great news is that [reactive execution](../manual/core/reactive/README.md) is now available for everyone.
See the `CqlSession.executeReactive` methods.
Apart from that, the only visible change is that DSE-specific features are now exposed in the API:
@@ -468,7 +468,7 @@ Apart from that, the only visible change is that DSE-specific features are now e
have default implementations so this doesn't break binary compatibility. You can just ignore them.
* new driver dependencies: TinkerPop, ESRI, Reactive Streams. If you want to keep your classpath
lean, you can exclude some dependencies when you don't use the corresponding DSE features; see the
- [Integration>Driver dependencies](../manual/core/integration/#driver-dependencies) section.
+ [Integration>Driver dependencies](../manual/core/integration/README.md#driver-dependencies) section.
#### For DataStax Enterprise users
@@ -509,7 +509,7 @@ changes right away; but you will get deprecation warnings:
* `DseDriverConfigLoader`: the driver no longer needs DSE-specific config loaders. All the factory
methods in this class now redirect to `DriverConfigLoader`. On that note, `dse-reference.conf`
does not exist anymore, all the driver defaults are now in
- [reference.conf](../manual/core/configuration/reference/).
+ [reference.conf](../manual/core/configuration/reference/README.md).
* plain-text authentication: there is now a single implementation that works with both Cassandra and
DSE. If you used `DseProgrammaticPlainTextAuthProvider`, replace it by
`PlainTextProgrammaticAuthProvider`. Similarly, if you wrote a custom implementation by
@@ -559,7 +559,7 @@ a few notable differences:
* the "mapper" and "accessor" concepts have been unified into a single "DAO" component, that handles
both pre-defined CRUD patterns, and user-provided queries.
-Refer to the [mapper manual](../manual/mapper/) for all the details.
+Refer to the [mapper manual](../manual/mapper/README.md) for all the details.
#### Internal API
@@ -637,7 +637,7 @@ Notable changes:
* simple statement instances are now created with the `newInstance` static factory method. This is
because `SimpleStatement` is now an interface (as most public API types).
-[API conventions]: ../manual/api_conventions
+[API conventions]: ../manual/api_conventions/README.md
#### Configuration
@@ -705,9 +705,9 @@ This is fully customizable: the configuration is exposed to the rest of the driv
`DriverConfig` interface; if the default implementation doesn't work for you, you can write your
own.
-For more details, refer to the [manual](../manual/core/configuration).
+For more details, refer to the [manual](../manual/core/configuration/README.md).
[Typesafe Config]: https://github.com/typesafehub/config
#### Session
@@ -725,7 +725,7 @@ to the best common denominator (see
[JAVA-1295](https://datastax-oss.atlassian.net/browse/JAVA-1295)).
Reconnection is now possible at startup: if no contact point is reachable, the driver will retry at
-periodic intervals (controlled by the [reconnection policy](../manual/core/reconnection/)) instead
+periodic intervals (controlled by the [reconnection policy](../manual/core/reconnection/README.md)) instead
of throwing an error. To turn this on, set the following configuration option:
```
@@ -734,7 +734,7 @@ datastax-java-driver {
}
```
-The session now has a built-in [throttler](../manual/core/throttling/) to limit how many requests
+The session now has a built-in [throttler](../manual/core/throttling/README.md) to limit how many requests
can execute concurrently. Here's an example based on the number of requests (a rate-based variant is
also available):
@@ -754,7 +754,7 @@ Previous driver versions came with multiple load balancing policies that could b
other. In our experience, this was one of the most complicated aspects of the configuration.
In driver 4, we are taking a more opinionated approach: we provide a single [default
-policy](../manual/core/load_balancing/#default-policy), with what we consider as the best practices:
+policy](../manual/core/load_balancing/README.md#default-policy), with what we consider as the best practices:
* local only: we believe that failover should be handled at infrastructure level, not by application
code.
@@ -765,7 +765,7 @@ You can still provide your own policy by implementing the `LoadBalancingPolicy`
#### Statements
-Simple, bound and batch [statements](../manual/core/statements/) are now exposed in the public API
+Simple, bound and batch [statements](../manual/core/statements/README.md) are now exposed in the public API
as interfaces. The internal implementations are **immutable**. This makes them automatically
thread-safe: you don't need to worry anymore about sharing them or reusing them between asynchronous
executions.
@@ -793,7 +793,7 @@ maximum amount of time that `session.execute` will take, including any retry, sp
etc. You can set it with `Statement.setTimeout`, or globally in the configuration with the
`basic.request.timeout` option.
-[Prepared statements](../manual/core/statements/prepared/) are now cached client-side: if you call
+[Prepared statements](../manual/core/statements/prepared/README.md) are now cached client-side: if you call
`session.prepare()` twice with the same query string, it will no longer log a warning. The second
call will return the same statement instance, without sending anything to the server:
@@ -827,7 +827,7 @@ assert bs2.getConsistencyLevel() == DefaultConsistencyLevel.TWO;
```
DDL statements are now debounced; see [Why do DDL queries have a higher latency than driver
-3?](../faq/#why-do-ddl-queries-have-a-higher-latency-than-driver-3) in the FAQ.
+3?](../faq/README.md#why-do-ddl-queries-have-a-higher-latency-than-driver-3) in the FAQ.
#### Dual result set APIs
@@ -850,8 +850,8 @@ will find more information about asynchronous iterations in the manual pages abo
programming][4.x async programming] and [paging][4.x paging].
[3.x async paging]: http://docs.datastax.com/en/developer/java-driver/3.2/manual/async/#async-paging
-[4.x async programming]: ../manual/core/async/
-[4.x paging]: ../manual/core/paging/
+[4.x async programming]: ../manual/core/async/README.md
+[4.x paging]: ../manual/core/paging/README.md
#### CQL to Java type mappings
@@ -866,8 +866,8 @@ changed when it comes to [temporal types] such as `date` and `timestamp`:
The corresponding setter methods were also changed to expect these new types as inputs.
-[CQL to Java type mappings]: ../manual/core#cql-to-java-type-mapping
-[temporal types]: ../manual/core/temporal_types
+[CQL to Java type mappings]: ../manual/core/README.md#cql-to-java-type-mapping
+[temporal types]: ../manual/core/temporal_types/README.md
[java.time.LocalDate]: https://docs.oracle.com/javase/8/docs/api/java/time/LocalDate.html
[java.time.LocalTime]: https://docs.oracle.com/javase/8/docs/api/java/time/LocalTime.html
[java.time.Instant]: https://docs.oracle.com/javase/8/docs/api/java/time/Instant.html
@@ -875,7 +875,7 @@ The corresponding setter methods were also changed to expect these new types as
#### Metrics
-[Metrics](../manual/core/metrics/) are now divided into two categories: session-wide and per-node.
+[Metrics](../manual/core/metrics/README.md) are now divided into two categories: session-wide and per-node.
Each metric can be enabled or disabled individually in the configuration:
```
@@ -940,7 +940,7 @@ datastax-java-driver {
}
```
-See the [manual](../manual/core/metadata/) for all the details.
+See the [manual](../manual/core/metadata/README.md) for all the details.
#### Query builder
@@ -980,14 +980,14 @@ SimpleStatement statement = query
All query builder types are immutable, making them inherently thread-safe and share-safe.
-The query builder has its own [manual chapter](../manual/query_builder/), where the syntax is
+The query builder has its own [manual chapter](../manual/query_builder/README.md), where the syntax is
covered in detail.
#### Dedicated type for CQL identifiers
Instead of raw strings, the names of schema objects (keyspaces, tables, columns, etc.) are now
wrapped in a dedicated `CqlIdentifier` type. This avoids ambiguities with regard to [case
-sensitivity](../manual/case_sensitivity).
+sensitivity](../manual/case_sensitivity/README.md).
#### Pluggable request execution logic