Commit 25f5d44

sql: log when optimizer estimates for scans are inaccurate
This commit logs a warning on the gateway node when the estimated row count for a logical scan is inaccurate. The `DistSQLReceiver` on the gateway node now maintains the row count estimate and metadata for each logical scan stage. Table reader metrics are extended to include their StageID, which is used at the receiver to aggregate the emitted row counts from all table reader processors belonging to the same plan stage.

An estimate is considered inaccurate if it is off by at least a factor of 2 plus a fixed offset of 100 rows, matching the logic behind the warning in `EXPLAIN ANALYZE`. The log message includes the table and index being scanned, the estimated and actual row counts, the time since the last table stats collection, and the table's estimated staleness.

This log is gated behind a new cluster setting, `sql.log.scan_row_count_misestimate.enabled` (default off). Logging only happens for user tables and is rate limited to misestimates from at most one query every 10 seconds.

Fixes: #153748, #153873

Release note (sql change): Added a default-off cluster setting (`sql.log.scan_row_count_misestimate.enabled`) that enables logging a warning on the gateway node when optimizer estimates for scans are inaccurate. The log message includes the table and index being scanned, the estimated and actual row counts, the time since the last table stats collection, and the table's estimated staleness.
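For reference, a minimal standalone restatement of the inaccuracy criterion described above; the helper name is hypothetical, and in the commit itself the check lives inline in DistSQLReceiver.maybeLogMisestimates (see the distsql_running.go diff below):

package main

import "fmt"

// isInaccurate reports whether an estimate is off by at least a factor of 2
// plus a fixed offset of 100 rows, in either direction.
func isInaccurate(estimated, actual uint64) bool {
	const factor, additive = 2, 100
	return actual*factor+additive < estimated ||
		estimated*factor+additive < actual
}

func main() {
	fmt.Println(isInaccurate(10000, 50)) // true: large overestimate
	fmt.Println(isInaccurate(150, 100))  // false: within tolerance
}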
1 parent b8adb07 commit 25f5d44

18 files changed (+202, −20 lines)

docs/generated/settings/settings-for-tenants.txt

Lines changed: 1 addition & 0 deletions
@@ -323,6 +323,7 @@ sql.insights.execution_insights_capacity integer 1000 the size of the per-node s
 sql.insights.high_retry_count.threshold integer 10 the number of retries a slow statement must have undergone for its high retry count to be highlighted as a potential problem application
 sql.insights.latency_threshold duration 100ms amount of time after which an executing statement is considered slow. Use 0 to disable. application
 sql.log.redact_names.enabled boolean false if set, schema object identifers are redacted in SQL statements that appear in event logs application
+sql.log.scan_row_count_misestimate.enabled boolean false when set to true, log a warning when a scan's actual row count differs significantly from the optimizer's estimate application
 sql.log.slow_query.experimental_full_table_scans.enabled boolean false when set to true, statements that perform a full table/index scan will be logged to the slow query log even if they do not meet the latency threshold. Must have the slow query log enabled for this setting to have any effect. application
 sql.log.slow_query.internal_queries.enabled boolean false when set to true, internal queries which exceed the slow query log threshold are logged to a separate log. Must have the slow query log enabled for this setting to have any effect. application
 sql.log.slow_query.latency_threshold duration 0s when set to non-zero, log statements whose service latency exceeds the threshold to a secondary logger on each node application

docs/generated/settings/settings.html

Lines changed: 1 addition & 0 deletions
@@ -278,6 +278,7 @@
 <tr><td><div id="setting-sql-insights-high-retry-count-threshold" class="anchored"><code>sql.insights.high_retry_count.threshold</code></div></td><td>integer</td><td><code>10</code></td><td>the number of retries a slow statement must have undergone for its high retry count to be highlighted as a potential problem</td><td>Basic/Standard/Advanced/Self-Hosted</td></tr>
 <tr><td><div id="setting-sql-insights-latency-threshold" class="anchored"><code>sql.insights.latency_threshold</code></div></td><td>duration</td><td><code>100ms</code></td><td>amount of time after which an executing statement is considered slow. Use 0 to disable.</td><td>Basic/Standard/Advanced/Self-Hosted</td></tr>
 <tr><td><div id="setting-sql-log-redact-names-enabled" class="anchored"><code>sql.log.redact_names.enabled</code></div></td><td>boolean</td><td><code>false</code></td><td>if set, schema object identifers are redacted in SQL statements that appear in event logs</td><td>Basic/Standard/Advanced/Self-Hosted</td></tr>
+<tr><td><div id="setting-sql-log-scan-row-count-misestimate-enabled" class="anchored"><code>sql.log.scan_row_count_misestimate.enabled</code></div></td><td>boolean</td><td><code>false</code></td><td>when set to true, log a warning when a scan&#39;s actual row count differs significantly from the optimizer&#39;s estimate</td><td>Basic/Standard/Advanced/Self-Hosted</td></tr>
 <tr><td><div id="setting-sql-log-slow-query-experimental-full-table-scans-enabled" class="anchored"><code>sql.log.slow_query.experimental_full_table_scans.enabled</code></div></td><td>boolean</td><td><code>false</code></td><td>when set to true, statements that perform a full table/index scan will be logged to the slow query log even if they do not meet the latency threshold. Must have the slow query log enabled for this setting to have any effect.</td><td>Basic/Standard/Advanced/Self-Hosted</td></tr>
 <tr><td><div id="setting-sql-log-slow-query-internal-queries-enabled" class="anchored"><code>sql.log.slow_query.internal_queries.enabled</code></div></td><td>boolean</td><td><code>false</code></td><td>when set to true, internal queries which exceed the slow query log threshold are logged to a separate log. Must have the slow query log enabled for this setting to have any effect.</td><td>Basic/Standard/Advanced/Self-Hosted</td></tr>
 <tr><td><div id="setting-sql-log-slow-query-latency-threshold" class="anchored"><code>sql.log.slow_query.latency_threshold</code></div></td><td>duration</td><td><code>0s</code></td><td>when set to non-zero, log statements whose service latency exceeds the threshold to a secondary logger on each node</td><td>Basic/Standard/Advanced/Self-Hosted</td></tr>

pkg/sql/colfetcher/colbatch_direct_scan.go

Lines changed: 1 addition & 0 deletions
@@ -127,6 +127,7 @@ func (s *ColBatchDirectScan) DrainMeta() []execinfrapb.ProducerMetadata {
     meta.Metrics = execinfrapb.GetMetricsMeta()
     meta.Metrics.BytesRead = s.GetBytesRead()
     meta.Metrics.RowsRead = s.GetRowsRead()
+    meta.Metrics.StageID = s.stageID
     trailingMeta = append(trailingMeta, *meta)
     return trailingMeta
 }

pkg/sql/colfetcher/colbatch_scan.go

Lines changed: 1 addition & 0 deletions
@@ -257,6 +257,7 @@ func (s *ColBatchScan) DrainMeta() []execinfrapb.ProducerMetadata {
     meta.Metrics = execinfrapb.GetMetricsMeta()
     meta.Metrics.BytesRead = s.GetBytesRead()
     meta.Metrics.RowsRead = s.GetRowsRead()
+    meta.Metrics.StageID = s.stageID
     trailingMeta = append(trailingMeta, *meta)
     return trailingMeta
 }

pkg/sql/conn_executor_exec.go

Lines changed: 4 additions & 0 deletions
@@ -3352,6 +3352,10 @@ func (ex *connExecutor) execWithDistSQLEngine(
         }
         err = ex.server.cfg.DistSQLPlanner.PlanAndRunAll(ctx, evalCtx, planCtx, planner, recv, evalCtxFactory)
     }
+
+    if err == nil && res.Err() == nil {
+        recv.maybeLogMisestimates(ctx, planner)
+    }
     return recv.stats, err
 }

pkg/sql/distsql_physical_planner.go

Lines changed: 4 additions & 0 deletions
@@ -11,6 +11,7 @@ import (
     "fmt"
     "reflect"
     "sort"
+    "time"

     "github.com/cockroachdb/cockroach/pkg/base"
     "github.com/cockroachdb/cockroach/pkg/cloud"
@@ -2245,6 +2246,7 @@ func (dsp *DistSQLPlanner) createTableReaders(
             reverse: n.reverse,
             parallelize: n.parallelize,
             estimatedRowCount: n.estimatedRowCount,
+            statsCreatedAt: n.statsCreatedAt,
             reqOrdering: n.reqOrdering,
             finalizeLastStageCb: planCtx.associateWithPlanNode(n),
         },
@@ -2263,6 +2265,7 @@ type tableReaderPlanningInfo struct {
     reverse bool
     parallelize bool
     estimatedRowCount uint64
+    statsCreatedAt time.Time
     reqOrdering ReqOrdering
     finalizeLastStageCb func(*physicalplan.PhysicalPlan) // will be nil in the spec factory
 }
@@ -2498,6 +2501,7 @@ func (dsp *DistSQLPlanner) planTableReaders(

         corePlacement[i].SQLInstanceID = sp.SQLInstanceID
         corePlacement[i].EstimatedRowCount = info.estimatedRowCount
+        corePlacement[i].StatsCreatedAt = info.statsCreatedAt
         corePlacement[i].Core.TableReader = tr
     }

pkg/sql/distsql_running.go

Lines changed: 136 additions & 17 deletions
@@ -28,6 +28,8 @@ import (
     "github.com/cockroachdb/cockroach/pkg/rpc/rpcbase"
     "github.com/cockroachdb/cockroach/pkg/server/telemetry"
     "github.com/cockroachdb/cockroach/pkg/settings"
+    "github.com/cockroachdb/cockroach/pkg/settings/cluster"
+    "github.com/cockroachdb/cockroach/pkg/sql/catalog/descpb"
     "github.com/cockroachdb/cockroach/pkg/sql/colflow"
     "github.com/cockroachdb/cockroach/pkg/sql/distsql"
     "github.com/cockroachdb/cockroach/pkg/sql/execinfra"
@@ -47,10 +49,12 @@ import (
     "github.com/cockroachdb/cockroach/pkg/sql/sessiondatapb"
     "github.com/cockroachdb/cockroach/pkg/sql/sqlerrors"
     "github.com/cockroachdb/cockroach/pkg/sql/sqltelemetry"
+    "github.com/cockroachdb/cockroach/pkg/sql/stats"
     "github.com/cockroachdb/cockroach/pkg/sql/types"
     "github.com/cockroachdb/cockroach/pkg/util/buildutil"
     "github.com/cockroachdb/cockroach/pkg/util/errorutil/unimplemented"
     "github.com/cockroachdb/cockroach/pkg/util/hlc"
+    "github.com/cockroachdb/cockroach/pkg/util/humanizeutil"
     "github.com/cockroachdb/cockroach/pkg/util/interval"
     "github.com/cockroachdb/cockroach/pkg/util/log"
     "github.com/cockroachdb/cockroach/pkg/util/mon"
@@ -1114,6 +1118,10 @@ type DistSQLReceiver struct {

     stats topLevelQueryStats

+    // scanStageEstimateMap maps stage IDs for logical scans to their
+    // corresponding scanStageEstimate.
+    scanStageEstimateMap map[int32]scanStageEstimate
+
     // isTenantExplainAnalyze is used to indicate that network egress should be
     // collected in order to estimate RU consumption for a tenant that is running
     // a query with EXPLAIN ANALYZE.
@@ -1342,12 +1350,45 @@ func (c *CallbackResultWriter) Err() error {
     return c.err
 }

+// scanStageEstimate holds the optimizer's row count estimate and table
+// statistics metadata for a logical scan. It accumulates actual rows read
+// across all distributed processors in the stage and is used for logging when
+// the estimate is significantly off.
+type scanStageEstimate struct {
+    estimatedRowCount uint64
+    statsCreatedAt time.Time
+    tableID descpb.ID
+    tableName string
+    indexName string
+
+    rowsRead uint64
+}
+
+var misestimateLogLimiter = log.Every(10 * time.Second)
+
+func (s *scanStageEstimate) logMisestimate(ctx context.Context, refresher *stats.Refresher) {
+    var suffix string
+    if !s.statsCreatedAt.IsZero() {
+        timeSinceStats := timeutil.Since(s.statsCreatedAt)
+        suffix = fmt.Sprintf("; table stats collected %s ago",
+            humanizeutil.LongDuration(timeSinceStats))
+        staleness, err := refresher.EstimateStaleness(ctx, s.tableID)
+        if err == nil {
+            suffix += fmt.Sprintf(" (estimated %.0f%% stale)", staleness*100)
+        }
+    }
+    log.Dev.Warningf(ctx, "inaccurate estimate for scan on table %q (index %q): estimated=%d actual=%d%s",
+        s.tableName, s.indexName, s.estimatedRowCount, s.rowsRead, suffix)
+}
+
 var _ execinfra.RowReceiver = &DistSQLReceiver{}
 var _ execinfra.BatchReceiver = &DistSQLReceiver{}

 var receiverSyncPool = sync.Pool{
     New: func() interface{} {
-        return &DistSQLReceiver{}
+        return &DistSQLReceiver{
+            scanStageEstimateMap: make(map[int32]scanStageEstimate),
+        }
     },
 }

@@ -1391,13 +1432,14 @@ func MakeDistSQLReceiver(
         batchWriter: batchWriter,
         // At the time of writing, there is only one concurrent goroutine that
         // might send at most one error.
-        concurrentErrorCh: make(chan error, 1),
-        cleanup:           cleanup,
-        rangeCache:        rangeCache,
-        txn:               txn,
-        clockUpdater:      clockUpdater,
-        stmtType:          stmtType,
-        tracing:           tracing,
+        concurrentErrorCh:    make(chan error, 1),
+        cleanup:              cleanup,
+        rangeCache:           rangeCache,
+        txn:                  txn,
+        clockUpdater:         clockUpdater,
+        stmtType:             stmtType,
+        tracing:              tracing,
+        scanStageEstimateMap: r.scanStageEstimateMap,
     }
     return r
 }
@@ -1416,6 +1458,9 @@ func (r *DistSQLReceiver) resetForLocalRerun(stats topLevelQueryStats) {
     r.closed = false
     r.stats = stats
     r.egressCounter = nil
+    for k := range r.scanStageEstimateMap {
+        delete(r.scanStageEstimateMap, k)
+    }
     if r.progressAtomic != nil {
         atomic.StoreUint64(r.progressAtomic, math.Float64bits(0))
     }
@@ -1424,7 +1469,12 @@
 // Release releases this DistSQLReceiver back to the pool.
 func (r *DistSQLReceiver) Release() {
     r.cleanup()
-    *r = DistSQLReceiver{}
+    for k := range r.scanStageEstimateMap {
+        delete(r.scanStageEstimateMap, k)
+    }
+    *r = DistSQLReceiver{
+        scanStageEstimateMap: r.scanStageEstimateMap,
+    }
     receiverSyncPool.Put(r)
 }

@@ -1433,14 +1483,15 @@
 func (r *DistSQLReceiver) clone() *DistSQLReceiver {
     ret := receiverSyncPool.Get().(*DistSQLReceiver)
     *ret = DistSQLReceiver{
-        ctx:               r.ctx,
-        concurrentErrorCh: make(chan error, 1),
-        cleanup:           func() {},
-        rangeCache:        r.rangeCache,
-        txn:               r.txn,
-        clockUpdater:      r.clockUpdater,
-        stmtType:          tree.Rows,
-        tracing:           r.tracing,
+        ctx:                  r.ctx,
+        concurrentErrorCh:    make(chan error, 1),
+        cleanup:              func() {},
+        rangeCache:           r.rangeCache,
+        txn:                  r.txn,
+        clockUpdater:         r.clockUpdater,
+        stmtType:             tree.Rows,
+        tracing:              r.tracing,
+        scanStageEstimateMap: ret.scanStageEstimateMap,
     }
     return ret
 }
@@ -1544,6 +1595,12 @@ func (r *DistSQLReceiver) pushMeta(meta *execinfrapb.ProducerMetadata) execinfra
         r.stats.bytesRead += meta.Metrics.BytesRead
         r.stats.rowsRead += meta.Metrics.RowsRead
         r.stats.rowsWritten += meta.Metrics.RowsWritten
+
+        if sm, ok := r.scanStageEstimateMap[meta.Metrics.StageID]; ok {
+            sm.rowsRead += uint64(meta.Metrics.RowsRead)
+            r.scanStageEstimateMap[meta.Metrics.StageID] = sm
+        }
+
         if r.progressAtomic != nil && r.expectedRowsRead != 0 {
             progress := float64(r.stats.rowsRead) / float64(r.expectedRowsRead)
             atomic.StoreUint64(r.progressAtomic, math.Float64bits(progress))
@@ -1772,6 +1829,64 @@ func (r *DistSQLReceiver) ProducerDone() {
     r.closed = true
 }

+func (r *DistSQLReceiver) makeScanEstimates(physPlan *PhysicalPlan, st *cluster.Settings) {
+    if !execinfra.LogScanRowCountMisestimate.Get(&st.SV) {
+        return
+    }
+
+    for _, p := range physPlan.Processors {
+        if p.Spec.Core.TableReader == nil {
+            continue
+        }
+        stageID := p.Spec.StageID
+        if _, exists := r.scanStageEstimateMap[stageID]; !exists {
+            r.scanStageEstimateMap[stageID] = scanStageEstimate{
+                estimatedRowCount: p.Spec.EstimatedRowCount,
+                statsCreatedAt: p.Spec.StatsCreatedAt,
+                tableID: p.Spec.Core.TableReader.FetchSpec.TableID,
+                tableName: p.Spec.Core.TableReader.FetchSpec.TableName,
+                indexName: p.Spec.Core.TableReader.FetchSpec.IndexName,
+            }
+        }
+    }
+}
+
+func (r *DistSQLReceiver) maybeLogMisestimates(ctx context.Context, planner *planner) {
+    if !execinfra.LogScanRowCountMisestimate.Get(&planner.ExecCfg().Settings.SV) {
+        return
+    }
+
+    checkedLimiter := false
+    for _, s := range r.scanStageEstimateMap {
+        actualRowCount := s.rowsRead
+        estimatedRowCount := s.estimatedRowCount
+        if estimatedRowCount == 0 {
+            continue
+        }
+
+        // Note: This is the same inaccuracy criteria as in explain/emit.go.
+        const inaccurateFactor = 2
+        const inaccurateAdditive = 100
+        inaccurateEstimate := actualRowCount*inaccurateFactor+inaccurateAdditive < estimatedRowCount ||
+            estimatedRowCount*inaccurateFactor+inaccurateAdditive < actualRowCount
+        if !inaccurateEstimate {
+            continue
+        }
+        if isSystemTable, err := planner.IsSystemTable(ctx, int64(s.tableID)); err != nil || isSystemTable {
+            continue
+        }
+
+        // Log all or none of the misestimated scans in the query.
+        if !checkedLimiter {
+            if !misestimateLogLimiter.ShouldLog() {
+                return
+            }
+            checkedLimiter = true
+        }
+        s.logMisestimate(ctx, planner.ExecCfg().StatsRefresher)
+    }
+}
+
 // getFinishedSetupFn returns a function to be passed into
 // DistSQLPlanner.PlanAndRun or DistSQLPlanner.Run when running an "outer" plan
 // that might create "inner" plans (e.g. apply join iterations). The returned
@@ -1959,6 +2074,7 @@ func (dsp *DistSQLPlanner) planAndRunSubquery(
     // receiver, and use it and serialize the results of the subquery. The type
     // of the results stored in the container depends on the type of the subquery.
     subqueryRecv := recv.clone()
+    subqueryRecv.makeScanEstimates(subqueryPhysPlan, dsp.st)
     defer subqueryRecv.Release()
     defer recv.stats.add(&subqueryRecv.stats)
     var typs []*types.T
@@ -2084,6 +2200,7 @@
         // with many duplicate elements.
         subqueryResultMemAcc.Shrink(ctx, alreadyAccountedFor-actualSize)
     }
+    subqueryRecv.maybeLogMisestimates(ctx, planner)
     return nil
 }

@@ -2131,6 +2248,7 @@ func (dsp *DistSQLPlanner) PlanAndRun(
     } else {
         finalizePlanWithRowCount(ctx, planCtx, physPlan, planCtx.planner.curPlan.mainRowCount)
         recv.expectedRowsRead = int64(physPlan.TotalEstimatedScannedRows)
+        recv.makeScanEstimates(physPlan, dsp.st)
         dsp.Run(ctx, planCtx, txn, physPlan, recv, &extEvalCtx.Context, finishedSetupFn)
     }
     if planCtx.isLocal {
@@ -2203,6 +2321,7 @@
     }
     finalizePlanWithRowCount(ctx, localPlanCtx, localPhysPlan, localPlanCtx.planner.curPlan.mainRowCount)
     recv.expectedRowsRead = int64(localPhysPlan.TotalEstimatedScannedRows)
+    recv.makeScanEstimates(localPhysPlan, dsp.st)
     // We already called finishedSetupFn in the previous call to Run, since we
     // only got here if we got a distributed error, not an error during setup.
     dsp.Run(ctx, localPlanCtx, txn, localPhysPlan, recv, &extEvalCtx.Context, nil /* finishedSetupFn */)
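For intuition, here is a minimal standalone sketch of the "all or none" rate-limited logging pattern used by maybeLogMisestimates above. The limiter type is a simplified stand-in for CockroachDB's log.Every utility, and all names in this sketch are illustrative rather than part of the change:

package main

import (
	"fmt"
	"sync"
	"time"
)

// every is a simplified stand-in for log.Every: ShouldLog returns true at
// most once per interval.
type every struct {
	mu       sync.Mutex
	interval time.Duration
	last     time.Time
}

func (e *every) ShouldLog() bool {
	e.mu.Lock()
	defer e.mu.Unlock()
	now := time.Now()
	if now.Sub(e.last) < e.interval {
		return false
	}
	e.last = now
	return true
}

var limiter = every{interval: 10 * time.Second}

// logQueryMisestimates consults the limiter once per query, so either all of
// a query's misestimated scans are logged together or none are.
func logQueryMisestimates(misestimates []string) {
	checked := false
	for _, m := range misestimates {
		if !checked {
			if !limiter.ShouldLog() {
				return
			}
			checked = true
		}
		fmt.Println("warning:", m)
	}
}

func main() {
	logQueryMisestimates([]string{"scan on t@t_pkey: estimated=1000 actual=5"})
	// A second query within the 10s window is suppressed entirely.
	logQueryMisestimates([]string{"scan on u@u_idx: estimated=10 actual=5000"})
}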

pkg/sql/distsql_spec_exec_factory.go

Lines changed: 1 addition & 0 deletions
@@ -362,6 +362,7 @@ func (e *distSQLSpecExecFactory) ConstructScan(
             reverse: params.Reverse,
             parallelize: params.Parallelize,
             estimatedRowCount: params.EstimatedRowCount,
+            statsCreatedAt: params.StatsCreatedAt,
             reqOrdering: ReqOrdering(reqOrdering),
         },
     )

pkg/sql/execinfra/utils.go

Lines changed: 12 additions & 0 deletions
@@ -60,3 +60,15 @@ var IncludeRUEstimateInExplainAnalyze = settings.RegisterBoolSetting(
     true,
     settings.WithName("sql.explain_analyze.include_ru_estimation.enabled"),
 )
+
+// LogScanRowCountMisestimate controls whether we log a warning when the
+// actual row count observed by a scan differs significantly from the optimizer
+// estimate.
+var LogScanRowCountMisestimate = settings.RegisterBoolSetting(
+    settings.ApplicationLevel,
+    "sql.log.scan_row_count_misestimate.enabled",
+    "when set to true, log a warning when a scan's actual row count differs "+
+        "significantly from the optimizer's estimate",
+    false,
+    settings.WithPublic,
+)
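Since the setting is public and default-off, it can be enabled like any other cluster setting over a SQL connection. A minimal usage sketch in Go; the connection string, database name, and error handling are placeholders for a local insecure cluster and are not part of this change:

package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // PostgreSQL-wire driver; CockroachDB speaks pgwire.
)

func main() {
	// Placeholder connection string for a local insecure single-node cluster.
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Enable misestimate logging cluster-wide (the default is off).
	if _, err := db.Exec(
		"SET CLUSTER SETTING sql.log.scan_row_count_misestimate.enabled = true",
	); err != nil {
		log.Fatal(err)
	}
}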

pkg/sql/execinfrapb/data.proto

Lines changed: 7 additions & 0 deletions
@@ -339,6 +339,13 @@ message RemoteProducerMetadata {
     optional int64 rows_read = 2 [(gogoproto.nullable) = false];
     // Total number of rows modified while executing a statement.
     optional int64 rows_written = 3 [(gogoproto.nullable) = false];
+    // Stage identifier that produced these metrics. This is used to aggregate
+    // row counts across distributed table reader processors for misestimate
+    // logging. Note that other disk-reading processors (index and lookup joins,
+    // inverted joins, zigzag joins) do not set this field. This is safe because
+    // we enumerate physical plan stages starting at 1, so the default value of
+    // 0 isn't conflated with any actual plan stage.
+    optional int32 stage_id = 4 [(gogoproto.nullable) = false, (gogoproto.customname) = "StageID"];
   }
   oneof value {
     RangeInfos range_info = 1;
