
Commit cf2c9de

Merge branch 'main' into add-prod-build-notify-workflow
2 parents 70f139c + 2ee94b6 commit cf2c9de

File tree

168 files changed

+413
-208
lines changed


src/current/_data/releases.yml

Lines changed: 56 additions & 0 deletions
```diff
@@ -9561,3 +9561,59 @@
     docker_arm_limited_access: false
   source: true
   previous_release: v25.4.0-alpha.2
+
+
+- release_name: v25.4.0-beta.2
+  major_version: v25.4
+  release_date: '2025-10-10'
+  release_type: Testing
+  go_version: go1.23.12
+  sha: c9fbbfff437b9283883a269402038f5952cc4863
+  has_sql_only: true
+  has_sha256sum: true
+  mac:
+    mac_arm: true
+    mac_arm_experimental: true
+    mac_arm_limited_access: false
+  windows: true
+  linux:
+    linux_arm: true
+    linux_arm_experimental: false
+    linux_arm_limited_access: false
+    linux_intel_fips: true
+    linux_arm_fips: false
+  docker:
+    docker_image: cockroachdb/cockroach-unstable
+    docker_arm: true
+    docker_arm_experimental: false
+    docker_arm_limited_access: false
+  source: true
+  previous_release: v25.4.0-beta.1
+
+
+- release_name: v25.4.0-beta.3
+  major_version: v25.4
+  release_date: '2025-10-16'
+  release_type: Testing
+  go_version: go1.23.12
+  sha: d70350c0c33cc6e0f0a58e0ec9b5e52b9ba40661
+  has_sql_only: true
+  has_sha256sum: true
+  mac:
+    mac_arm: true
+    mac_arm_experimental: true
+    mac_arm_limited_access: false
+  windows: true
+  linux:
+    linux_arm: true
+    linux_arm_experimental: false
+    linux_arm_limited_access: false
+    linux_intel_fips: true
+    linux_arm_fips: false
+  docker:
+    docker_image: cockroachdb/cockroach-unstable
+    docker_arm: true
+    docker_arm_experimental: false
+    docker_arm_limited_access: false
+  source: true
+  previous_release: v25.4.0-beta.2
```

src/current/_data/versions.csv

Lines changed: 1 addition & 1 deletion
```diff
@@ -18,4 +18,4 @@ v24.3,2024-11-18,2025-11-18,2026-05-18,24.3.11,24.3.12,2025-05-05,2026-05-05,202
 v25.1,2025-02-18,2025-08-18,N/A,N/A,N/A,N/A,N/A,N/A,v24.3,release-25.1,2029-02-18
 v25.2,2025-05-09,2026-05-12,2026-11-12,N/A,N/A,N/A,N/A,N/A,v25.1,release-25.2,2029-05-09
 v25.3,2025-08-04,2026-02-04,N/A,N/A,N/A,N/A,N/A,N/A,v25.2,release-25.3,2029-08-04
-v25.4,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,v25.3,release-25.3,N/A
+v25.4,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,v25.3,release-25.4,N/A
```

src/current/_includes/cockroachcloud/cdc/create-core-changefeed-avro.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -10,7 +10,7 @@ In this example, you'll set up a core changefeed for a single-node cluster that
     --background
     ~~~
 
-1. Download and extract the [Confluent Open Source platform](https://www.confluent.io/download/).
+1. Download and extract the [Confluent platform](https://www.confluent.io/download/).
 
 1. Move into the extracted `confluent-<version>` directory and start Confluent:
```

Lines changed: 14 additions & 0 deletions
{% if page.path contains "cockroachcloud" %}
{{ site.data.alerts.callout_danger }}
Cockroach Labs does not officially support S3-compatible storage solutions other than AWS S3, Google Cloud Storage (GCS), and Azure Blob Storage. Some common compatibility issues may be fixed by adding the `AWS_SKIP_CHECKSUM` option to the S3 URLs.

The [Cockroach Labs Support team]({% link {{ site.current_cloud_version }}/support-resources.md %}) is available to offer assistance where possible. If you encounter issues when using unsupported S3-compatible storage, drivers, or frameworks, contact the maintainer directly.
{{ site.data.alerts.end }}
{% else %}
{{ site.data.alerts.callout_danger }}
Cockroach Labs does not officially support S3-compatible storage solutions other than AWS S3, Google Cloud Storage (GCS), and Azure Blob Storage.{% if page.version.version != "v24.1" %} Some common compatibility issues may be fixed by adding the `AWS_SKIP_CHECKSUM` option to the S3 URLs.{% endif %}

The [Cockroach Labs Support team]({% link {{ page.version.version }}/support-resources.md %}) is available to offer assistance where possible. If you encounter issues when using unsupported S3-compatible storage, drivers, or frameworks, contact the maintainer.
{{ site.data.alerts.end }}
{% endif %}
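As an illustration of the workaround the callout mentions, the `AWS_SKIP_CHECKSUM` option can be appended to the storage URL's query string. This is a sketch only: the bucket, path, and credential placeholders are hypothetical, and it assumes the option is passed as a `true`-valued URL parameter.

```sql
-- Hypothetical bucket, path, and credentials; AWS_SKIP_CHECKSUM is the
-- option named in the callout above, assumed here to take the value true.
BACKUP DATABASE defaultdb
  INTO 's3://example-bucket/backups?AWS_ACCESS_KEY_ID=<key>&AWS_SECRET_ACCESS_KEY=<secret>&AWS_SKIP_CHECKSUM=true'
  AS OF SYSTEM TIME '-10s';
```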
Lines changed: 65 additions & 0 deletions
## v25.4.0-beta.2

Release Date: October 10, 2025

{% include releases/new-release-downloads-docker-image.md release=include.release %}

<h3 id="v25-4-0-beta-2-general-changes">General changes</h3>

- The changefeed bulk delivery setting was made optional. [#154953][#154953]
<h3 id="v25-4-0-beta-2-sql-language-changes">SQL language changes</h3>

- Added the `SHOW INSPECT ERRORS` command, which can be used to view issues identified by running the `INSPECT` command to validate tables and indexes. [#154337][#154337]
- Added the `sql.catalog.allow_leased_descriptors.enabled` cluster setting, which is false by default. When set to true, queries that access `pg_catalog` or `information_schema` can use cached leased descriptors to populate the data in those tables, with the tradeoff that some of the data could be stale. [#154491][#154491]
- Added index acceleration for a subset of `jsonb_path_exists` filters. Given `jsonb_path_exists(json_obj, json_path_expression)`, an inverted index can be used only when `json_path_expression` is not in strict mode and matches one of the following patterns:
  - Keychain mode: `$.[key|wildcard].[key|wildcard]...`. For this pattern, a prefix span is generated for the inverted expression.
  - Filter with a fixed end value and an equality check: `$.[key|wildcard] ? (@.[key|wildcard].[key|wildcard]... == [string|number|null|boolean])`. Because the end value is fixed, a single-value span is generated.
  - The following edge cases are not supported:
    - `$`
    - `$[*]`
    - Operation expressions, such as `$.a.b.c == 12`, `$.a.b.c > 12`, or `$.a.b.c < 12`
    - Filters with an inequality check, such as `$.a.b ? (@.a > 10)`

  Note that acceleration applies only when `jsonb_path_exists` is used as a filter, that is, in a `WHERE` clause. [#154631][#154631]
- The optimizer can now use table statistics that merge the latest full statistic with all newer partial statistics, including those over arbitrary constraints over a single span. [#154755][#154755]
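To make the supported `jsonb_path_exists` patterns concrete, the following sketch shows filter shapes that qualify for acceleration; the table, column, and paths are hypothetical:

```sql
-- Hypothetical table with an inverted index on the JSONB column.
CREATE TABLE docs (id INT PRIMARY KEY, payload JSONB, INVERTED INDEX (payload));

-- Keychain mode ($.user.name): eligible for inverted-index acceleration.
SELECT id FROM docs WHERE jsonb_path_exists(payload, '$.user.name');

-- Filter with an equality check on a fixed end value: also eligible.
SELECT id FROM docs WHERE jsonb_path_exists(payload, '$.user ? (@.role == "admin")');

-- Inequality filters (e.g., @.age > 21) are among the unsupported edge cases.
```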
<h3 id="v25-4-0-beta-2-operational-changes">Operational changes</h3>

- Two new changefeed gauge metrics, `changefeed.progress_skew.span` and `changefeed.progress_skew.table`, track the maximum skew between a changefeed's slowest and fastest span or table. [#154166][#154166]
- The metrics `sql.select.started.count`, `sql.insert.started.count`, `sql.update.started.count`, and `sql.delete.started.count` are now emitted with labels under the common metric name `sql.started.count`, using a `query_type` label to distinguish each operation. [#154388][#154388]
- Added the cluster setting `storage.unhealthy_write_duration` (default: 20s), which is used to indicate to the allocator that a store's disk is unhealthy. The cluster setting `kv.allocator.disk_unhealthy_io_overload_score` controls the overload score assigned to a store with an unhealthy disk, where a higher score prevents lease or replica transfers to the store, or causes the store to shed leases. The default value of that setting is 0, so allocator behavior is unaffected by default. [#154459][#154459]
- Added the cluster setting `sql.schema.approx_max_object_count` (default: 20,000), which prevents the creation of new schema objects when the limit is exceeded. The check uses cached table statistics for performance and is approximate: it may not be immediately accurate until table statistics are updated by the background statistics refreshing job. Clusters that have been running stably with a larger object count should raise the limit, or disable it by setting the value to 0. In future releases, the default value for this setting will be raised as more CockroachDB features support larger object counts. [#154576][#154576]
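As a usage sketch for the new object-count limit described above (the values shown are illustrative, not recommendations):

```sql
-- Raise the approximate schema-object limit for a cluster that already
-- runs stably with more objects than the default of 20,000.
SET CLUSTER SETTING sql.schema.approx_max_object_count = 50000;

-- Or disable the check entirely, per the release note, by setting it to 0.
SET CLUSTER SETTING sql.schema.approx_max_object_count = 0;
```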
<h3 id="v25-4-0-beta-2-bug-fixes">Bug fixes</h3>

- Vector index backfill now properly tracks job progress in `SHOW JOBS` output. [#154261][#154261]
- Fixed a bug that caused panics when executing `COPY` into a table with hidden columns and expression indexes. The panic only occurred when the `expect_and_ignore_not_visible_columns_in_copy` setting was enabled. This bug had been present since the setting was introduced in v22.1.0. [#154289][#154289]
- **Idle latency** on the **Transaction Details** page in the DB Console is now reported more accurately. Previously, transactions that used prepared statements (e.g., with placeholders) overcounted idle time, while those that included observer statements (common in the SQL CLI) undercounted it. [#154385][#154385]
- Fixed a bug where `RESTORE` of a database with a `SECONDARY REGION` did not apply the lease preferences for that region. [#154659][#154659]
- Fixed a bug where a changefeed could perform many unnecessary job progress saves during an initial scan. [#154709][#154709]
- Fixed a bug where a changefeed targeting only a subset of a table's column families could become stuck. [#154915][#154915]

<h3 id="v25-4-0-beta-2-performance-improvements">Performance improvements</h3>

- The cost of generic query plans is now calculated based on worst-case selectivities for placeholder equalities (e.g., `x = $1`). This reduces the chance of suboptimal generic query plans being chosen when `plan_cache_mode=auto`. [#154899][#154899]
[#154337]: https://github.com/cockroachdb/cockroach/pull/154337
[#154491]: https://github.com/cockroachdb/cockroach/pull/154491
[#154388]: https://github.com/cockroachdb/cockroach/pull/154388
[#154459]: https://github.com/cockroachdb/cockroach/pull/154459
[#154385]: https://github.com/cockroachdb/cockroach/pull/154385
[#154755]: https://github.com/cockroachdb/cockroach/pull/154755
[#154576]: https://github.com/cockroachdb/cockroach/pull/154576
[#154915]: https://github.com/cockroachdb/cockroach/pull/154915
[#154631]: https://github.com/cockroachdb/cockroach/pull/154631
[#154261]: https://github.com/cockroachdb/cockroach/pull/154261
[#154659]: https://github.com/cockroachdb/cockroach/pull/154659
[#154953]: https://github.com/cockroachdb/cockroach/pull/154953
[#154166]: https://github.com/cockroachdb/cockroach/pull/154166
[#154289]: https://github.com/cockroachdb/cockroach/pull/154289
[#154709]: https://github.com/cockroachdb/cockroach/pull/154709
[#154899]: https://github.com/cockroachdb/cockroach/pull/154899
Lines changed: 12 additions & 0 deletions
## v25.4.0-beta.3

Release Date: October 16, 2025

{% include releases/new-release-downloads-docker-image.md release=include.release %}

<h3 id="v25-4-0-beta-3-bug-fixes">Bug fixes</h3>

- Fixed a bug that caused internal errors for `INSERT ... ON CONFLICT ... DO UPDATE` statements when the target table had both a computed column and a `BEFORE` trigger. This bug had been present since triggers were introduced in v24.3.0. [#155077][#155077]

[#155077]: https://github.com/cockroachdb/cockroach/pull/155077

src/current/_includes/v24.1/performance/increase-server-side-retries.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -2,4 +2,4 @@
 
 <a id="result-buffer-size"></a>
 
-- Limit the size of the result sets of your transactions to under 16KB, so that CockroachDB is more likely to [automatically retry]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) when [previous reads are invalidated]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#read-refreshing) at a [pushed timestamp]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#timestamp-cache). When a transaction returns a result set over 16KB, even if that transaction has been sent as a single batch, CockroachDB cannot automatically retry the transaction. You can change the results buffer size for all new sessions using the `sql.defaults.results_buffer.size` [cluster setting](cluster-settings.html), or for a specific session using the `results_buffer_size` [connection parameter]({% link {{page.version.version}}/connection-parameters.md %}#additional-connection-parameters).
+- Limit the size of the result sets of your transactions to less than the value of the [`sql.defaults.results_buffer.size` cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-defaults-results-buffer-size), so that CockroachDB is more likely to [automatically retry]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) when [previous reads are invalidated]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#read-refreshing) at a [pushed timestamp]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#timestamp-cache). When a transaction returns a result set larger than the configured buffer size, even if that transaction has been sent as a single batch, CockroachDB cannot automatically retry the transaction. You can change the results buffer size for all new sessions using the [`sql.defaults.results_buffer.size` cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-defaults-results-buffer-size), or for a specific session using the `results_buffer_size` [connection parameter]({% link {{page.version.version}}/connection-parameters.md %}#additional-connection-parameters).
```
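As a sketch of the two configuration paths this guidance names (the size shown is illustrative, not a recommendation):

```sql
-- Inspect the current default results buffer size.
SHOW CLUSTER SETTING sql.defaults.results_buffer.size;

-- Change the default for all new sessions.
SET CLUSTER SETTING sql.defaults.results_buffer.size = '512 KiB';
```

A per-session value can instead be supplied through the `results_buffer_size` connection parameter described in the linked connection-parameters page.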
Lines changed: 1 addition & 1 deletion
```diff
@@ -1,7 +1,7 @@
 To use super regions, keep the following considerations in mind:
 
 - Your cluster must be a [multi-region cluster]({% link {{ page.version.version }}/multiregion-overview.md %}).
-- Super regions [must be enabled](#enable-super-regions).
+- Super regions [must be enabled]{% if page.name == "show-super-regions.md" %}(#enable-super-regions){% else %}({% link {{ page.version.version }}/alter-database.md %}#enable-super-regions){% endif %}.
 - Super regions can only contain one or more [database regions]({% link {{ page.version.version }}/multiregion-overview.md %}#database-regions) that have already been added with [`ALTER DATABASE ... ADD REGION`]({% link {{ page.version.version }}/alter-database.md %}#add-region).
 - Each database region can only belong to one super region. In other words, given two super regions _A_ and _B_, the set of database regions in _A_ must be [disjoint](https://wikipedia.org/wiki/Disjoint_sets) from the set of database regions in _B_.
 - You cannot [drop a region]({% link {{ page.version.version }}/alter-database.md %}#drop-region) that is part of a super region until you either [alter the super region]({% link {{ page.version.version }}/alter-database.md %}#alter-super-region) to remove it, or [drop the super region]({% link {{ page.version.version }}/alter-database.md %}#drop-super-region) altogether.
```
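As a hedged sketch of the workflow these considerations describe (the database, region, and super region names are hypothetical):

```sql
-- Regions must already have been added with ALTER DATABASE ... ADD REGION.
ALTER DATABASE movr ADD REGION "us-east1";
ALTER DATABASE movr ADD REGION "us-west1";

-- Group existing database regions into a super region; each database
-- region may belong to at most one super region.
ALTER DATABASE movr ADD SUPER REGION "usa" VALUES "us-east1", "us-west1";
```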

src/current/_includes/v24.3/cdc/create-sinkless-changefeed-avro.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -10,7 +10,7 @@ In this example, you'll set up a basic changefeed for a single-node cluster that
     --background
     ~~~
 
-1. Download and extract the [Confluent Open Source platform](https://www.confluent.io/download/).
+1. Download and extract the [Confluent platform](https://www.confluent.io/download/).
 
 1. Move into the extracted `confluent-<version>` directory and start Confluent:
```

src/current/_includes/v24.3/performance/increase-server-side-retries.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -2,4 +2,4 @@
 
 <a id="result-buffer-size"></a>
 
-- Limit the size of the result sets of your transactions to under 16KB, so that CockroachDB is more likely to [automatically retry]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) when [previous reads are invalidated]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#read-refreshing) at a [pushed timestamp]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#timestamp-cache). When a transaction returns a result set over 16KB, even if that transaction has been sent as a single batch, CockroachDB cannot automatically retry the transaction. You can change the results buffer size for all new sessions using the `sql.defaults.results_buffer.size` [cluster setting](cluster-settings.html), or for a specific session using the `results_buffer_size` [connection parameter]({% link {{page.version.version}}/connection-parameters.md %}#additional-connection-parameters).
+- Limit the size of the result sets of your transactions to less than the value of the [`sql.defaults.results_buffer.size` cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-defaults-results-buffer-size), so that CockroachDB is more likely to [automatically retry]({% link {{ page.version.version }}/transactions.md %}#automatic-retries) when [previous reads are invalidated]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#read-refreshing) at a [pushed timestamp]({% link {{ page.version.version }}/architecture/transaction-layer.md %}#timestamp-cache). When a transaction returns a result set larger than the configured buffer size, even if that transaction has been sent as a single batch, CockroachDB cannot automatically retry the transaction. You can change the results buffer size for all new sessions using the [`sql.defaults.results_buffer.size` cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-defaults-results-buffer-size), or for a specific session using the `results_buffer_size` [connection parameter]({% link {{page.version.version}}/connection-parameters.md %}#additional-connection-parameters).
```
