
Conversation

@wolf31o2 (Member) commented Nov 25, 2025


Summary by cubic

Adds an initial benchmark suite for the ledger and database to measure block processing, queries, and chain sync using real immutable test data. Adds profiling flags, Make targets, and a script to run benchmarks and record results with history.

  • New Features

    • Added benchmarks: UTxO/address lookups, tx history, pools/DReps/datums, protocol params, block retrieval/nonce, stake/pool registrations, era transitions, real block read/process, chain sync, block throughput, concurrent queries, memory usage, index building, and basic DB transaction creation.
    • Results tooling: generate_benchmarks.sh to run all benches and write benchmark_results.md with historical comparisons.
    • CLI profiling: --cpuprofile and --memprofile flags in dingo; new Make targets: bench and test-load-profile (example invocation sketched after this list).
  • Refactors

    • Minor cleanups: interface{} to any in metadata, and log formatting in S3/GCS plugins.
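
As a rough usage sketch based on this summary (the profile file names below are examples, and the exact binary path may differ):

    # Run all benchmarks with memory stats via the new Make target
    make bench

    # Run dingo with CPU and memory profiling enabled
    ./dingo --cpuprofile=cpu.prof --memprofile=mem.prof

    # Inspect the resulting profiles with the standard Go tooling
    go tool pprof cpu.prof
    go tool pprof mem.prof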

Written for commit 3093541. Summary will update automatically on new commits.

Summary by CodeRabbit

  • New Features

    • Added CPU and memory profiling flags and new Makefile targets (including a default target) to run benchmarks and load profiling.
  • Tests

    • Added comprehensive benchmark suites covering database and ledger workloads and lightweight benchmark tests.
  • Documentation

    • Added benchmark results reporting and a script to run/collect/emit benchmark reports.
  • Chores

    • Minor formatting updates and modernized type usage.


@wolf31o2 requested a review from a team as a code owner on November 25, 2025 03:05
@coderabbitai bot (Contributor) commented Nov 25, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Adds CPU and memory profiling support to cmd/dingo/main.go via new --cpuprofile and --memprofile flags with early parsing to start/stop CPU profiling and write a heap profile at exit. Updates Makefile: adds all default, bench, and test-load-profile targets and extends .PHONY. Introduces extensive benchmark suites: ledger/benchmark_test.go (many real-data and in-memory benchmarks) and database/benchmark_test.go (transaction benchmark). Adds generate_benchmarks.sh and benchmark_results.md. Minor edits: logging formatting in blob plugins and parameter type change from interface{} to any in sqlite transaction helper.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • cmd/dingo/main.go: verify early flag parsing, lifecycle and error handling for pprof files, and that profiling integrates correctly with Cobra execution.
  • ledger/benchmark_test.go: review many benchmark setups, seeding from testdata, resource management, and correctness of real-data vs in-memory variants.
  • generate_benchmarks.sh & benchmark_results.md: validate parsing logic, output formatting, and file write/compare behavior.
  • Makefile: confirm new targets and .PHONY updates align with project CI/dev workflows.
  • database/benchmark_test.go and sqlite/transaction.go: ensure DB init and signature change (interface{} → any) compile and behave as intended.
  • database/plugin/blob/*/commit_timestamp.go: small logging reformat — check no accidental behavior change.

Pre-merge checks and finishing touches

✅ Passed checks (2 passed)
  • Title check: ✅ Passed. The PR title directly and accurately reflects the main objective of the changeset: introducing comprehensive benchmark suites for ledger and database packages with supporting infrastructure.
  • Description check: ✅ Passed. Check skipped because CodeRabbit’s high-level summary is enabled.
✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch test/benchmark-all-the-things


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🧹 Nitpick comments (5)
Makefile (1)

46-57: Add bench and test-load-profile to .PHONY declaration.

These new targets should be declared as phony to ensure they always run regardless of any files with matching names.

Apply this diff to line 16:

-.PHONY: build mod-tidy clean format golines test
+.PHONY: build mod-tidy clean format golines test bench test-load-profile
cmd/dingo/main.go (1)

152-165: Memory profile file may not close on error paths.

When os.Exit(1) is called on lines 157 or 162, the deferred f.Close() won't execute since os.Exit doesn't run defers. While this is minor (the OS will reclaim resources), consider using an explicit close pattern for consistency:

 	if memprofile != "" {
 		f, err := os.Create(memprofile)
 		if err != nil {
 			fmt.Fprintf(os.Stderr, "could not create memory profile: %v\n", err)
 			os.Exit(1)
 		}
-		defer f.Close()
 		if err := pprof.WriteHeapProfile(f); err != nil {
+			f.Close()
 			fmt.Fprintf(os.Stderr, "could not write memory profile: %v\n", err)
 			os.Exit(1)
 		}
+		f.Close()
 		fmt.Fprintf(os.Stderr, "Memory profiling complete\n")
 	}
database/benchmark_test.go (1)

22-36: Add b.ResetTimer() after setup and consider closing transactions.

The database initialization cost is being included in benchmark measurements. Additionally, read-only transactions should ideally be closed to prevent resource accumulation during long benchmark runs.

 func BenchmarkTransactionCreate(b *testing.B) {
 	// Create a temporary database
 	config := &Config{
 		DataDir: "", // In-memory
 	}
 	db, err := New(config)
 	if err != nil {
 		b.Fatal(err)
 	}
 	defer db.Close()

+	b.ResetTimer()
 	for b.Loop() {
 		txn := db.Transaction(false)
-		_ = txn
+		txn.Rollback() // Clean up transaction resources
 	}
 }
ledger/benchmark_test.go (2)

80-109: Add b.ResetTimer() after database setup.

The comment "Reset timer after setup" indicates the intent, but b.ResetTimer() is not actually called. This pattern is missing in multiple benchmarks throughout the file, causing setup time to be included in measurements.

 	// Reset timer after setup
-
+	b.ResetTimer()
 	// Benchmark lookup (on empty database for now)
 	for b.Loop() {

Apply this pattern to all benchmarks with similar setup phases (e.g., BenchmarkBlockRetrievalByIndex, BenchmarkTransactionHistoryQueries, etc.).
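
For reference, a minimal skeleton of the pattern this comment recommends (the setup helper and query names here are illustrative, not taken from the PR):

 func BenchmarkExampleLookup(b *testing.B) {
 	// Expensive one-time setup (database creation, seeding, etc.)
 	db := setupAndSeedDatabase(b) // hypothetical setup helper
 	defer db.Close()

 	// Explicitly exclude the setup above from the measurement,
 	// as suggested in this review
 	b.ResetTimer()

 	for b.Loop() {
 		// Only the loop body should be measured
 		_ = db.Lookup("example-key") // hypothetical query
 	}
 }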


30-48: Placeholder benchmarks provide minimal value.

BenchmarkValidateTx is skipped, and BenchmarkTxSubmissionLatency / BenchmarkBlockProcessingThroughput only measure dummy allocations. Consider either implementing meaningful benchmarks or removing the placeholders to avoid misleading results in CI.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ed8339e and a00a782.

📒 Files selected for processing (8)
  • .github/workflows/go-test.yml (1 hunks)
  • Makefile (1 hunks)
  • cmd/dingo/main.go (5 hunks)
  • database/benchmark_test.go (1 hunks)
  • database/plugin/blob/aws/commit_timestamp.go (1 hunks)
  • database/plugin/blob/gcs/commit_timestamp.go (1 hunks)
  • database/plugin/metadata/sqlite/transaction.go (1 hunks)
  • ledger/benchmark_test.go (1 hunks)
🧰 Additional context used
🧠 Learnings (9)
📚 Learning: 2025-11-20T19:58:38.889Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 0
File: :0-0
Timestamp: 2025-11-20T19:58:38.889Z
Learning: In Go, test dependencies (such as github.com/stretchr/testify) are correctly marked as direct dependencies in go.mod, not as indirect dependencies, even when they are only used in test files. This is a standard practice in Go's module system.

Applied to files:

  • Makefile
📚 Learning: 2025-10-22T21:29:26.924Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T21:29:26.924Z
Learning: For tests under ledger/, prefer constructing a minimal ledger Block (MockBlock mirroring chain/chain_test.go) and pass it to Chain.AddBlock, rather than using database/models.Block or relying on models.Block.Decode().

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: In utxorpc/sync_test.go, tests should not construct database/models.Block directly for sync server use. Instead, build a models.Block only as a container for CBOR and call models.Block.Decode() to obtain a github.com/blinklabs-io/gouroboros/ledger/common.Block (or the repository’s ledger.Block interface), and use that decoded value in the test.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:19:25.277Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:434-434
Timestamp: 2025-10-22T20:19:25.277Z
Learning: In this repo’s tests, use models.Block.Decode() to obtain a ledger.Block before calling chain.Chain.AddBlock, since AddBlock expects a ledger.Block, while LedgerState.GetBlock returns a database/models.Block.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:13:35.972Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T20:13:35.972Z
Learning: Unit tests under utxorpc should assert that models.Block.Decode() returns a github.com/blinklabs-io/gouroboros/ledger/common.Block, not rely on models.Block directly in protocol/ledger-facing logic.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: In utxorpc/sync_test.go, tests must not pass database/models.Block into chain.Chain.AddBlock. They should call models.Block.Decode() and pass the resulting github.com/blinklabs-io/gouroboros/ledger/common.Block to AddBlock.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:13:35.972Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T20:13:35.972Z
Learning: In utxorpc code paths, always operate on github.com/blinklabs-io/gouroboros/ledger/common.Block. Use database/models.Block only as the storage type and call models.Block.Decode() to obtain a ledger/common.Block before further processing.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:18:45.125Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T20:18:45.125Z
Learning: In tests, Chain.AddBlock must be given a gouroboros ledger.Block (not database/models.Block). Use a lightweight MockBlock (as in chain/chain_test.go) or call models.Block.Decode() when starting from stored rows.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: utxorpc tests must pass github.com/blinklabs-io/gouroboros/ledger/common.Block to chain.Chain.AddBlock. They should create database/models.Block only as a storage row, then call models.Block.Decode() to obtain a ledger/common.Block for AddBlock and any ledger/protocol-facing logic.

Applied to files:

  • ledger/benchmark_test.go
🧬 Code graph analysis (1)
ledger/benchmark_test.go (1)
database/models/block.go (1)
  • ErrBlockNotFound (23-23)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: cubic · AI code reviewer
  • GitHub Check: nilaway
  • GitHub Check: go-test (1.25.x, macos-latest)
  • GitHub Check: go-test (1.24.x, ubuntu-latest)
  • GitHub Check: go-test (1.24.x, macos-latest)
  • GitHub Check: docker (ubuntu-latest, amd64)
  • GitHub Check: lint
  • GitHub Check: Analyze (go)
🔇 Additional comments (5)
database/plugin/blob/gcs/commit_timestamp.go (1)

51-54: Formatting improvement approved.

Converting the error log to multi-line parameterized format improves readability and consistency with the warning log above (lines 46–49). The error handling logic remains unchanged and appropriate for this migration fallback path.

database/plugin/blob/aws/commit_timestamp.go (1)

41-44: Formatting improvement consistent across providers.

This change mirrors the GCS file update and improves consistency by using multi-line parameterized logging format. The error handling remains unchanged and appropriate for this migration fallback scenario.

.github/workflows/go-test.yml (1)

30-31: LGTM! Benchmark step added to CI.

Running benchmarks in CI provides visibility into performance characteristics. Note that benchmark results on shared CI runners may have high variance; consider using the -benchtime or -count flags for more stable comparisons, or only running benchmarks on specific triggers (e.g., releases) if CI time becomes a concern.
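
For instance, a step along these lines (flag values are illustrative) skips the unit tests and repeats each benchmark for more stable numbers:

    go test -run='^$' -bench=. -benchmem -benchtime=1s -count=5 ./...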

database/plugin/metadata/sqlite/transaction.go (1)

108-108: LGTM!

Using any instead of interface{} aligns with Go 1.18+ idiomatic style.

cmd/dingo/main.go (1)

79-108: LGTM! Early CPU profiling setup is correctly implemented.

Pre-parsing profiling flags before Cobra initialization ensures profiling captures the full application lifecycle. The deferred cleanup for CPU profiling is properly structured.
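
For context, the underlying runtime/pprof pattern such a setup typically follows looks roughly like this (a generic sketch, not the PR's exact code; cpuprofile is the flag value parsed earlier):

 	f, err := os.Create(cpuprofile)
 	if err != nil {
 		fmt.Fprintf(os.Stderr, "could not create CPU profile: %v\n", err)
 		os.Exit(1)
 	}
 	defer f.Close()
 	if err := pprof.StartCPUProfile(f); err != nil {
 		fmt.Fprintf(os.Stderr, "could not start CPU profile: %v\n", err)
 		os.Exit(1)
 	}
 	defer pprof.StopCPUProfile()

 	// ... execute the Cobra root command here ...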

@cubic-dev-ai bot left a comment

3 issues found across 8 files

Prompt for AI agents (all 3 issues)

Understand the root cause of the following 3 issues and fix them.


<file name=".github/workflows/go-test.yml">

<violation number="1" location=".github/workflows/go-test.yml:31">
`go test -bench` without `-run=^$` reruns the full unit test suite even though the previous step already ran `go test ./...`, doubling CI time. Add `-run=^$` to run only benchmarks here.</violation>
</file>

<file name="database/benchmark_test.go">

<violation number="1" location="database/benchmark_test.go:35">
Each benchmark iteration creates a transaction and never calls Commit/Rollback, so badger/gorm resources remain open until GC and the benchmark leaks transactions.</violation>
</file>

<file name="cmd/dingo/main.go">

<violation number="1" location="cmd/dingo/main.go:83">
`--cpuprofile`/`--memprofile` flags only work when typed as `--flag=value`; the standard Cobra form `--flag value` is parsed by Cobra but ignored here, so profiling silently never starts. Use the parsed flag values (or handle both syntaxes) when enabling profiles.</violation>
</file>


      - name: go-test
        run: go test ./...
      - name: go-bench
        run: go test -bench=. -benchmem ./...
@cubic-dev-ai bot commented Nov 25, 2025

go test -bench without -run=^$ reruns the full unit test suite even though the previous step already ran go test ./..., doubling CI time. Add -run=^$ to run only benchmarks here.

Prompt for AI agents
Address the following comment on .github/workflows/go-test.yml at line 31:

<comment>`go test -bench` without `-run=^$` reruns the full unit test suite even though the previous step already ran `go test ./...`, doubling CI time. Add `-run=^$` to run only benchmarks here.</comment>

<file context>
@@ -27,3 +27,5 @@ jobs:
       - name: go-test
         run: go test ./...
+      - name: go-bench
+        run: go test -bench=. -benchmem ./...
</file context>
Suggested change:
-        run: go test -bench=. -benchmem ./...
+        run: go test -run=^$ -bench=. -benchmem ./...


	for b.Loop() {
		txn := db.Transaction(false)
		_ = txn
@cubic-dev-ai bot commented Nov 25, 2025

Each benchmark iteration creates a transaction and never calls Commit/Rollback, so badger/gorm resources remain open until GC and the benchmark leaks transactions.

Prompt for AI agents
Address the following comment on database/benchmark_test.go at line 35:

<comment>Each benchmark iteration creates a transaction and never calls Commit/Rollback, so badger/gorm resources remain open until GC and the benchmark leaks transactions.</comment>

<file context>
@@ -0,0 +1,37 @@
+
+	for b.Loop() {
+		txn := db.Transaction(false)
+		_ = txn
+	}
+}
</file context>

	cpuprofile := ""
	memprofile := ""
	for _, arg := range os.Args {
		if after, ok := strings.CutPrefix(arg, "--cpuprofile="); ok {
@cubic-dev-ai bot commented Nov 25, 2025

--cpuprofile/--memprofile flags only work when typed as --flag=value; the standard Cobra form --flag value is parsed by Cobra but ignored here, so profiling silently never starts. Use the parsed flag values (or handle both syntaxes) when enabling profiles.

Prompt for AI agents
Address the following comment on cmd/dingo/main.go at line 83:

<comment>`--cpuprofile`/`--memprofile` flags only work when typed as `--flag=value`; the standard Cobra form `--flag value` is parsed by Cobra but ignored here, so profiling silently never starts. Use the parsed flag values (or handle both syntaxes) when enabling profiles.</comment>

<file context>
@@ -72,6 +76,37 @@ func commonRun() *slog.Logger {
+	cpuprofile := ""
+	memprofile := ""
+	for _, arg := range os.Args {
+		if after, ok := strings.CutPrefix(arg, "--cpuprofile="); ok {
+			cpuprofile = after
+		}
</file context>

@wolf31o2 force-pushed the test/benchmark-all-the-things branch from a00a782 to ff13c56 on November 25, 2025 11:26
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
cmd/dingo/main.go (1)

42-44: Profiling flags still ignore --flag value form and Cobra-parsed values

The profiling flags behave inconsistently:

  • The manual scan of os.Args only handles --cpuprofile=<path> / --memprofile=<path> and ignores the common Cobra style --cpuprofile <path> / --memprofile <path>. In that more typical form, cpuprofile/memprofile remain empty, so profiling silently never starts, even though Cobra accepts the flags and shows them in help.
  • rootCmd.PersistentFlags().StringVar(&globalFlags.cpuprofile, ...) / memprofile register flags and populate globalFlags, but the actual profiling logic only uses the locally parsed cpuprofile/memprofile variables, so Cobra’s parsed values are effectively ignored.

This is the same functional issue raised in the earlier review comment: users who type flags in standard Cobra form will not get profiles, which is surprising and hard to diagnose.

A minimal fix that stays with the current “pre-Cobra parse” design is to support both syntaxes when scanning os.Args. For example:

 func main() {
-	// Parse profiling flags before cobra setup
-	cpuprofile := ""
-	memprofile := ""
-	for _, arg := range os.Args {
-		if after, ok := strings.CutPrefix(arg, "--cpuprofile="); ok {
-			cpuprofile = after
-		}
-		if after, ok := strings.CutPrefix(arg, "--memprofile="); ok {
-			memprofile = after
-		}
-	}
+	// Parse profiling flags before cobra setup, supporting both
+	// `--flag=value` and `--flag value` forms.
+	cpuprofile := ""
+	memprofile := ""
+	args := os.Args[1:] // skip program name
+	for i := 0; i < len(args); i++ {
+		arg := args[i]
+
+		if after, ok := strings.CutPrefix(arg, "--cpuprofile="); ok {
+			cpuprofile = after
+			continue
+		}
+		if arg == "--cpuprofile" && i+1 < len(args) {
+			cpuprofile = args[i+1]
+			i++
+			continue
+		}
+
+		if after, ok := strings.CutPrefix(arg, "--memprofile="); ok {
+			memprofile = after
+			continue
+		}
+		if arg == "--memprofile" && i+1 < len(args) {
+			memprofile = args[i+1]
+			i++
+			continue
+		}
+	}

Optionally (for a cleaner design), you could instead drop the manual os.Args parsing and use globalFlags.cpuprofile / globalFlags.memprofile after Cobra has parsed flags (e.g., start CPU profiling in a PersistentPreRunE using those values and stop it after Execute() returns), but the diff above is the smallest change that makes both flag forms work as expected.

Also applies to: 79-89, 91-108, 127-130, 151-165

ledger/benchmark_test.go (1)

1590-1666: Fix chain‑sync benchmark iterator lifecycle and per‑iteration logic

In BenchmarkChainSyncFromGenesis:

  • The iterator from immDb.BlocksFromPoint(genesisPoint) is never closed (unlike other benchmarks in this file that defer iterator.Close()), so the immutable DB cursor can leak resources. This matches a previously raised comment on this region.
  • blocksProcessed is declared outside the for b.Loop() and you break the outer loop once it reaches 100 (Lines 1618–1660). That means the benchmark only ever processes ~100 blocks total, rather than “up to 100 blocks per benchmark iteration” as the comment suggests. This is the same logical issue flagged in the earlier review.

A more robust pattern is:

  • Create and close an iterator per benchmark iteration (or at least reset it per iteration).
  • Track the count per iteration, then aggregate/report however you prefer.

For example:

-	// Start from genesis (origin point)
-	genesisPoint := ocommon.NewPointOrigin()
-	iterator, err := immDb.BlocksFromPoint(genesisPoint)
-	if err != nil {
-		b.Fatal(err)
-	}
-
-	// Reset timer after setup
-
-	// Process blocks sequentially (simulate chain sync)
-	// Each benchmark iteration processes up to 100 blocks
-	blocksProcessed := 0
-	for b.Loop() {
-		block, err := iterator.Next()
-		...
-		if block == nil {
-			// End of chain reached, reset iterator for next benchmark iteration
-			iterator, err = immDb.BlocksFromPoint(genesisPoint)
-			...
-			continue
-		}
-		...
-		blocksProcessed++
-		if blocksProcessed >= 100 {
-			break
-		}
-	}
-
-	b.ReportMetric(float64(blocksProcessed), "blocks_processed")
+	genesisPoint := ocommon.NewPointOrigin()
+
+	var totalBlocksProcessed uint64
+	for b.Loop() {
+		iterator, err := immDb.BlocksFromPoint(genesisPoint)
+		if err != nil {
+			b.Fatal(err)
+		}
+
+		blocksProcessed := 0
+		for blocksProcessed < 100 {
+			block, err := iterator.Next()
+			if err != nil {
+				b.Fatal(err)
+			}
+			if block == nil {
+				break // end of chain
+			}
+
+			ledgerBlock, err := ledger.NewBlockFromCbor(block.Type, block.Cbor)
+			if err != nil {
+				continue
+			}
+
+			blockModel := models.Block{
+				ID:       block.Slot,
+				Slot:     block.Slot,
+				Hash:     block.Hash,
+				Number:   totalBlocksProcessed + uint64(blocksProcessed),
+				Type:     uint(ledgerBlock.Type()),
+				PrevHash: ledgerBlock.PrevHash().Bytes(),
+				Cbor:     ledgerBlock.Cbor(),
+			}
+			if err := db.BlockCreate(blockModel, nil); err == nil {
+				blocksProcessed++
+			}
+		}
+
+		iterator.Close()
+		totalBlocksProcessed += uint64(blocksProcessed)
+	}
+
+	b.ReportMetric(float64(totalBlocksProcessed), "blocks_processed")

Feel free to adjust the aggregation/reporting, but the key is to avoid reusing a single long‑lived iterator without closing it and to avoid breaking out of for b.Loop() after a fixed global block count. Based on learnings, once you hook this up to a real chain.Chain you’ll likely want to feed ledger.Block values into Chain.AddBlock instead of writing models.Block directly via BlockCreate.

🧹 Nitpick comments (1)
ledger/benchmark_test.go (1)

100-101: Either call b.ResetTimer() or drop the “Reset timer” comments

Many benchmarks have comments like “Reset timer after setup/seeding” but do not actually call b.ResetTimer(); e.g., lines 100, 172, 195, 288, 360, 390, 462, 492, 564, 598, 667, 699, 771, 799, 865, 900, 975, 1010, 1083, 1117, 1190, 1247, 1343, 1383, 1411, 1475, 1576, 1614.

With the b.Loop() style, any code before the loop still contributes once to the measured time. If you intend to exclude setup/seeding from the timings, add an explicit b.ResetTimer() right before each for b.Loop() { ... }. Otherwise, consider removing or rewording the comments to avoid misleading future readers.

Also applies to: 172-173, 195-196, 288-289, 360-361, 390-391, 462-463, 492-493, 564-565, 598-599, 667-668, 699-700, 771-772, 799-800, 865-866, 900-901, 975-976, 1010-1011, 1083-1084, 1117-1118, 1190-1191, 1247-1248, 1343-1344, 1383-1384, 1411-1412, 1475-1476, 1576-1577, 1614-1615

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a00a782 and ff13c56.

📒 Files selected for processing (7)
  • Makefile (1 hunks)
  • cmd/dingo/main.go (5 hunks)
  • database/benchmark_test.go (1 hunks)
  • database/plugin/blob/aws/commit_timestamp.go (1 hunks)
  • database/plugin/blob/gcs/commit_timestamp.go (1 hunks)
  • database/plugin/metadata/sqlite/transaction.go (1 hunks)
  • ledger/benchmark_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • database/plugin/blob/aws/commit_timestamp.go
  • Makefile
  • database/benchmark_test.go
  • database/plugin/metadata/sqlite/transaction.go
🧰 Additional context used
🧠 Learnings (8)
📚 Learning: 2025-10-22T21:29:26.924Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T21:29:26.924Z
Learning: For tests under ledger/, prefer constructing a minimal ledger Block (MockBlock mirroring chain/chain_test.go) and pass it to Chain.AddBlock, rather than using database/models.Block or relying on models.Block.Decode().

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:19:25.277Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:434-434
Timestamp: 2025-10-22T20:19:25.277Z
Learning: In this repo’s tests, use models.Block.Decode() to obtain a ledger.Block before calling chain.Chain.AddBlock, since AddBlock expects a ledger.Block, while LedgerState.GetBlock returns a database/models.Block.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: In utxorpc/sync_test.go, tests should not construct database/models.Block directly for sync server use. Instead, build a models.Block only as a container for CBOR and call models.Block.Decode() to obtain a github.com/blinklabs-io/gouroboros/ledger/common.Block (or the repository’s ledger.Block interface), and use that decoded value in the test.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: In utxorpc/sync_test.go, tests must not pass database/models.Block into chain.Chain.AddBlock. They should call models.Block.Decode() and pass the resulting github.com/blinklabs-io/gouroboros/ledger/common.Block to AddBlock.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:13:35.972Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T20:13:35.972Z
Learning: Unit tests under utxorpc should assert that models.Block.Decode() returns a github.com/blinklabs-io/gouroboros/ledger/common.Block, not rely on models.Block directly in protocol/ledger-facing logic.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:13:35.972Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T20:13:35.972Z
Learning: In utxorpc code paths, always operate on github.com/blinklabs-io/gouroboros/ledger/common.Block. Use database/models.Block only as the storage type and call models.Block.Decode() to obtain a ledger/common.Block before further processing.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:18:45.125Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T20:18:45.125Z
Learning: In tests, Chain.AddBlock must be given a gouroboros ledger.Block (not database/models.Block). Use a lightweight MockBlock (as in chain/chain_test.go) or call models.Block.Decode() when starting from stored rows.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: utxorpc tests must pass github.com/blinklabs-io/gouroboros/ledger/common.Block to chain.Chain.AddBlock. They should create database/models.Block only as a storage row, then call models.Block.Decode() to obtain a ledger/common.Block for AddBlock and any ledger/protocol-facing logic.

Applied to files:

  • ledger/benchmark_test.go
🧬 Code graph analysis (1)
ledger/benchmark_test.go (1)
database/models/block.go (1)
  • ErrBlockNotFound (23-23)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Analyze (go)
  • GitHub Check: nilaway
  • GitHub Check: lint
  • GitHub Check: docker (ubuntu-latest, amd64)
  • GitHub Check: go-test (1.24.x, ubuntu-latest)
  • GitHub Check: go-test (1.24.x, macos-latest)
🔇 Additional comments (4)
database/plugin/blob/gcs/commit_timestamp.go (1)

51-54: Logging reformat is fine and behavior-preserving

The multi-line Errorf call preserves the original message and arguments and just improves readability; no functional change introduced here.

cmd/dingo/main.go (1)

21-22: Profiling-related imports look appropriate

Adding runtime/pprof and strings here is consistent with the new profiling logic in main and is scoped correctly.

ledger/benchmark_test.go (2)

56-61: Fix invalid range usage over integer literals to make benchmarks compile

Lines around 58, 1460, and 1547 use for ... := range 100 / for ... := range 5, which is not valid Go (range cannot iterate an integer literal). These functions will not compile as-is.

Use standard counted loops instead, e.g.:

 func BenchmarkBlockProcessingThroughput(b *testing.B) {
@@
-	for b.Loop() {
-		// Dummy processing: simulate processing a block
-		for j := range 100 {
-			_ = j * j
-		}
-	}
+	for b.Loop() {
+		// Dummy processing: simulate processing a block
+		for j := 0; j < 100; j++ {
+			_ = j * j
+		}
+	}
 }
@@
-	// Get the first 5 blocks
-	var realBlocks []*immutable.Block
-	for i := range 5 {
+	// Get the first 5 blocks
+	var realBlocks []*immutable.Block
+	for i := 0; i < 5; i++ {
 		block, err := iterator.Next()
@@
-	// Load and store 100 real blocks
-	for i := range 100 {
+	// Load and store 100 real blocks
+	for i := 0; i < 100; i++ {
 		block, err := iterator.Next()

Also applies to: 1458-1468, 1546-1554

⛔ Skipped due to learnings
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: In utxorpc/sync_test.go, tests should not construct database/models.Block directly for sync server use. Instead, build a models.Block only as a container for CBOR and call models.Block.Decode() to obtain a github.com/blinklabs-io/gouroboros/ledger/common.Block (or the repository’s ledger.Block interface), and use that decoded value in the test.
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: In utxorpc/sync_test.go, tests must not pass database/models.Block into chain.Chain.AddBlock. They should call models.Block.Decode() and pass the resulting github.com/blinklabs-io/gouroboros/ledger/common.Block to AddBlock.
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T21:29:26.924Z
Learning: For tests under ledger/, prefer constructing a minimal ledger Block (MockBlock mirroring chain/chain_test.go) and pass it to Chain.AddBlock, rather than using database/models.Block or relying on models.Block.Decode().
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:56-56
Timestamp: 2025-10-22T20:13:35.164Z
Learning: In tests under ledger/state_test.go, PrimaryChain.AddBlock expects a "github.com/blinklabs-io/gouroboros/ledger/common".Block; use models.Block.Decode() to obtain that type before calling AddBlock.
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T20:18:45.125Z
Learning: In tests, Chain.AddBlock must be given a gouroboros ledger.Block (not database/models.Block). Use a lightweight MockBlock (as in chain/chain_test.go) or call models.Block.Decode() when starting from stored rows.
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:434-434
Timestamp: 2025-10-22T20:19:25.277Z
Learning: In this repo’s tests, use models.Block.Decode() to obtain a ledger.Block before calling chain.Chain.AddBlock, since AddBlock expects a ledger.Block, while LedgerState.GetBlock returns a database/models.Block.
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T20:13:35.972Z
Learning: Unit tests under utxorpc should assert that models.Block.Decode() returns a github.com/blinklabs-io/gouroboros/ledger/common.Block, not rely on models.Block directly in protocol/ledger-facing logic.
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: utxorpc tests must pass github.com/blinklabs-io/gouroboros/ledger/common.Block to chain.Chain.AddBlock. They should create database/models.Block only as a storage row, then call models.Block.Decode() to obtain a ledger/common.Block for AddBlock and any ledger/protocol-facing logic.

80-109: Incorrect review: metadata database methods don't return "record not found" errors

The review assumes that GetTransactionByHash, GetAccount, GetPool, GetStakeRegistrations, GetPoolRegistrations, and GetPParams return errors on missing records. However, verification shows this is not the case:

  • Methods using First() (GetAccount, GetTransactionByHash, GetPool) explicitly catch gorm.ErrRecordNotFound and return (nil, nil) instead of propagating an error
  • Methods using Find() (GetStakeRegistrations, GetPoolRegistrations, GetPParams, GetUtxosByAddress) return nil error on empty result sets per GORM v2 behavior—errors are only returned for actual database failures, not missing records

Therefore, benchmarks on empty/minimally-seeded databases will succeed and return empty/nil results, not fail fatally. The inconsistent error handling pattern described in the review does not reflect how these methods actually behave, and no changes are needed.

Likely an incorrect or invalid review comment.

@wolf31o2 force-pushed the test/benchmark-all-the-things branch from ff13c56 to a9365e8 on November 25, 2025 11:41
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🧹 Nitpick comments (3)
database/plugin/blob/aws/commit_timestamp.go (1)

41-44: LGTM! Formatting improvement for consistency.

This multi-line formatting of the error log improves readability and is consistent with similar changes in the GCS plugin.

ledger/benchmark_test.go (2)

31-44: Consider enhancing with real block data.

This benchmark currently uses dummy allocation patterns. Consider using real block data from the testdata to measure actual memory usage during block processing.


1336-1360: Consider implementing actual index building benchmark.

The current implementation only measures transaction overhead (as noted in the comment). Consider implementing the actual index building logic to make this benchmark meaningful.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ff13c56 and a9365e8.

📒 Files selected for processing (7)
  • Makefile (2 hunks)
  • cmd/dingo/main.go (5 hunks)
  • database/benchmark_test.go (1 hunks)
  • database/plugin/blob/aws/commit_timestamp.go (1 hunks)
  • database/plugin/blob/gcs/commit_timestamp.go (1 hunks)
  • database/plugin/metadata/sqlite/transaction.go (1 hunks)
  • ledger/benchmark_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • database/plugin/blob/gcs/commit_timestamp.go
  • database/plugin/metadata/sqlite/transaction.go
🧰 Additional context used
🧠 Learnings (9)
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: In utxorpc/sync_test.go, tests should not construct database/models.Block directly for sync server use. Instead, build a models.Block only as a container for CBOR and call models.Block.Decode() to obtain a github.com/blinklabs-io/gouroboros/ledger/common.Block (or the repository’s ledger.Block interface), and use that decoded value in the test.

Applied to files:

  • database/benchmark_test.go
  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T21:29:26.924Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T21:29:26.924Z
Learning: For tests under ledger/, prefer constructing a minimal ledger Block (MockBlock mirroring chain/chain_test.go) and pass it to Chain.AddBlock, rather than using database/models.Block or relying on models.Block.Decode().

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:19:25.277Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:434-434
Timestamp: 2025-10-22T20:19:25.277Z
Learning: In this repo’s tests, use models.Block.Decode() to obtain a ledger.Block before calling chain.Chain.AddBlock, since AddBlock expects a ledger.Block, while LedgerState.GetBlock returns a database/models.Block.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: In utxorpc/sync_test.go, tests must not pass database/models.Block into chain.Chain.AddBlock. They should call models.Block.Decode() and pass the resulting github.com/blinklabs-io/gouroboros/ledger/common.Block to AddBlock.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:13:35.164Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:56-56
Timestamp: 2025-10-22T20:13:35.164Z
Learning: In tests under ledger/state_test.go, PrimaryChain.AddBlock expects a "github.com/blinklabs-io/gouroboros/ledger/common".Block; use models.Block.Decode() to obtain that type before calling AddBlock.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:13:35.972Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T20:13:35.972Z
Learning: Unit tests under utxorpc should assert that models.Block.Decode() returns a github.com/blinklabs-io/gouroboros/ledger/common.Block, not rely on models.Block directly in protocol/ledger-facing logic.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:18:45.125Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T20:18:45.125Z
Learning: In tests, Chain.AddBlock must be given a gouroboros ledger.Block (not database/models.Block). Use a lightweight MockBlock (as in chain/chain_test.go) or call models.Block.Decode() when starting from stored rows.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: utxorpc tests must pass github.com/blinklabs-io/gouroboros/ledger/common.Block to chain.Chain.AddBlock. They should create database/models.Block only as a storage row, then call models.Block.Decode() to obtain a ledger/common.Block for AddBlock and any ledger/protocol-facing logic.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-11-20T19:58:38.889Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 0
File: :0-0
Timestamp: 2025-11-20T19:58:38.889Z
Learning: In Go, test dependencies (such as github.com/stretchr/testify) are correctly marked as direct dependencies in go.mod, not as indirect dependencies, even when they are only used in test files. This is a standard practice in Go's module system.

Applied to files:

  • Makefile
🧬 Code graph analysis (1)
ledger/benchmark_test.go (1)
database/models/block.go (1)
  • ErrBlockNotFound (23-23)
🪛 checkmake (0.2.2)
Makefile

[warning] 16-16: Missing required phony target "all"

(minphony)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: lint
  • GitHub Check: nilaway
  • GitHub Check: docker (ubuntu-latest, amd64)
  • GitHub Check: go-test (1.24.x, macos-latest)
  • GitHub Check: go-test (1.24.x, ubuntu-latest)
  • GitHub Check: Analyze (go)
🔇 Additional comments (7)
Makefile (2)

16-16: LGTM! Benchmark target properly configured.

The bench target correctly uses -run=^$$ to skip regular tests and runs all benchmarks with memory statistics.

Also applies to: 46-47


53-58: LGTM! Profiling target properly configured.

The test-load-profile target correctly builds the binary and runs it with both CPU and memory profiling flags, with helpful guidance for users.

database/benchmark_test.go (1)

22-38: LGTM! Transaction benchmark properly implemented.

The benchmark correctly creates and commits read-only transactions. The transaction leak issue flagged in the previous review has been addressed by adding txn.Commit() on line 36.

cmd/dingo/main.go (3)

79-97: LGTM! Flag parsing handles both syntaxes correctly.

The manual flag parsing before Cobra setup now handles both --flag=value and --flag value syntaxes (lines 86-90 for cpuprofile, 92-95 for memprofile), addressing the issue from the previous review.


99-116: LGTM! CPU profiling properly initialized.

The CPU profiling setup correctly:

  • Creates the profile file with error handling
  • Starts profiling before Cobra execution
  • Defers stopping profiling to ensure cleanup
  • Provides user-facing messages

160-174: LGTM! Memory profiling properly finalized.

The memory profiling correctly captures the heap state at program end with proper error handling and explicit file closure on error paths.

ledger/benchmark_test.go (1)

46-76: LGTM! Benchmark properly structured.

The benchmark correctly resets the timer after setup and appropriately benchmarks lookups against an empty database as documented.

Comment on lines 1575 to 1798
	// Start from genesis (origin point)
	genesisPoint := ocommon.NewPointOrigin()
	iterator, err := immDb.BlocksFromPoint(genesisPoint)
	if err != nil {
		b.Fatal(err)
	}
	defer iterator.Close() // Ensure iterator is closed

	// Reset timer after setup

	// Process blocks sequentially (simulate chain sync)
	// Each benchmark iteration processes up to 100 blocks
	for b.Loop() {
		// Reset iterator for each benchmark iteration
		iterator, err = immDb.BlocksFromPoint(genesisPoint)
		if err != nil {
			b.Fatal(err)
		}

⚠️ Potential issue | 🟡 Minor

Fix resource leak: initial iterator is never used.

The iterator created at line 1577 is immediately replaced by a new iterator inside the benchmark loop at line 1589, causing the initial iterator to leak. Remove the initial iterator creation and defer, as they serve no purpose.

Apply this diff:

 	// Start from genesis (origin point)
 	genesisPoint := ocommon.NewPointOrigin()
-	iterator, err := immDb.BlocksFromPoint(genesisPoint)
-	if err != nil {
-		b.Fatal(err)
-	}
-	defer iterator.Close() // Ensure iterator is closed
 
 	// Reset timer after setup
+	b.ResetTimer()
 
 	// Process blocks sequentially (simulate chain sync)
 	// Each benchmark iteration processes up to 100 blocks
 	for b.Loop() {
 		// Reset iterator for each benchmark iteration
-		iterator, err = immDb.BlocksFromPoint(genesisPoint)
+		iterator, err := immDb.BlocksFromPoint(genesisPoint)
 		if err != nil {
 			b.Fatal(err)
 		}

Note: This also adds the missing b.ResetTimer() call mentioned in the earlier comment.

🤖 Prompt for AI Agents
In ledger/benchmark_test.go around lines 1575 to 1592, an iterator is created
before the benchmark loop then immediately replaced inside the loop, leaking the
initial iterator; remove the initial immDb.BlocksFromPoint(genesisPoint) call
and its defer iterator.Close(), and instead create the iterator only inside the
loop where it is currently created, ensuring you defer Close() for that iterator
(or explicitly Close() at end of each iteration) and add the missing
b.ResetTimer() after setup and before the benchmark loop begins.

@wolf31o2 force-pushed the test/benchmark-all-the-things branch from a9365e8 to 44603d4 on November 25, 2025 14:04
@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
ledger/benchmark_test.go (1)

302-304: Add the missing b.ResetTimer() calls where comments say “Reset timer after setup”

Several benchmarks have a // Reset timer after setup (or seeding) comment but never actually call b.ResetTimer(). That means the one-time setup/seeding work is included in the measured time, which will skew the results and make comparisons with other benchmarks (where you do reset the timer) misleading.

The affected spots are:

  • BenchmarkTransactionHistoryQueries (Line 302)
  • BenchmarkAccountLookupByStakeKey (Line 405)
  • BenchmarkPoolLookupByKeyHash (Line 508)
  • BenchmarkDRepLookupByKeyHash (Line 611)
  • BenchmarkDatumLookupByHash (Line 716)
  • BenchmarkProtocolParametersLookupByEpoch (Line 818)
  • BenchmarkBlockNonceLookup (Line 920)
  • BenchmarkStakeRegistrationLookups (Line 1031)
  • BenchmarkPoolRegistrationLookups (Line 1139)
  • BenchmarkEraTransitionPerformance (Line 1270)
  • BenchmarkRealBlockReading (Line 1487)
  • BenchmarkRealBlockProcessing (Line 1551)

In each case, insert b.ResetTimer() immediately after setup/seeding and before the for b.Loop():

-	// Reset timer after setup
-
-	// Benchmark lookup (on empty database for now)
+	// Reset timer after setup
+	b.ResetTimer()
+
+	// Benchmark lookup (on empty database for now)

(Adapt comment text for the specific benchmark.)

This will align these benchmarks with the others in this file where the timer is already reset correctly after seeding.

You can verify coverage with something like:

#!/bin/bash
# List benchmarks and quickly grep for missing ResetTimer() after setup comments.
rg -n -C2 "Reset timer after setup" ledger/benchmark_test.go
rg -n "func Benchmark" ledger/benchmark_test.go

Check that each “Reset timer” comment is immediately followed by b.ResetTimer() and that no benchmark timings unintentionally include heavy seeding work.

Also applies to: 405-407, 508-510, 611-613, 716-718, 818-820, 920-922, 1031-1033, 1139-1141, 1270-1272, 1487-1489, 1551-1553

🧹 Nitpick comments (3)
Makefile (1)

16-16: Optional: consider adding a conventional all target or relaxing checkmake rule

checkmake warns about missing PHONY target all. If you care about that check, you could either:

  • add a simple all target (e.g., all: build test bench), or
  • tune/disable that specific rule in your checkmake config.

This is non-blocking; current PHONY set is otherwise fine.

database/benchmark_test.go (1)

21-37: Benchmark shape is solid; consider asserting on Commit failures

The benchmark now correctly:

  • Creates a short-lived in-memory DB,
  • Calls b.ResetTimer() after setup, and
  • Closes each transaction in the loop, avoiding the previous leak.

One potential improvement: if txn.Commit() returns an error, it should probably be checked and treated as a benchmark failure rather than silently ignored, so you don’t keep measuring in a broken state.

For example:

-	for b.Loop() {
-		txn := db.Transaction(false)
-		txn.Commit() // Close the transaction to prevent leaks
-	}
+	for b.Loop() {
+		txn := db.Transaction(false)
+		if err := txn.Commit(); err != nil {
+			b.Fatal(err)
+		}
+	}

You may want to confirm the signature/semantics of (*Txn).Commit in database/txn.go to ensure this change matches the intended API (i.e., that it does in fact return an error on failure).

ledger/benchmark_test.go (1)

134-173: Consider factoring out shared “seed from immutable testdata” helpers

The real-data benchmarks repeatedly implement very similar seeding logic:

  • Open immutable DB from ../database/immutable/testdata.
  • Sample a handful of slots.
  • Convert immutable blocks to ledger blocks with ledger.NewBlockFromCbor.
  • Build models.Block values and call db.BlockCreate.

This pattern appears (with minor variations) in many places (e.g., the UTxO, block index, tx history, accounts, pools, DReps, datums, pparams, nonce, stake/pool registrations, era transitions, real data queries, and chain sync benchmarks).

The duplication makes it easy for subtle fixes (e.g., to sampling strategy, ID assignment, or error handling) to go out of sync across benchmarks.

You might consider introducing a small set of helpers in this package—for example:

  • openImmutableTestDB(b *testing.B) *immutable.DB
  • seedBlocksFromSlots(b *testing.B, db *database.Database, immDb *immutable.DB, slots []uint64) error
  • or a more generic seedBlocks(b, db, immDb, count int) ([]*immutable.Block, error)

and reusing them across the real-data benchmarks. That would centralize the seeding behavior and make future changes easier and less error-prone.

Also applies to: 232-269, 325-362, 428-465, 531-568, 635-672, 740-777, 842-879, 945-982, 1055-1092, 1163-1200, 1306-1354, 1403-1467, 1525-1544, 1613-1650
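
A rough sketch of what such a helper could look like, reusing the calls already shown in the diffs above (the *database.Database and *immutable.DB types, the models.Block field set, and the BlockCreate signature are assumptions based on this review; the helper would rely on the imports already present in ledger/benchmark_test.go):

 // seedBlocks loads up to count blocks from the immutable testdata DB and
 // stores them via db.BlockCreate, returning how many were actually stored.
 func seedBlocks(b *testing.B, db *database.Database, immDb *immutable.DB, count int) int {
 	b.Helper()

 	iterator, err := immDb.BlocksFromPoint(ocommon.NewPointOrigin())
 	if err != nil {
 		b.Fatal(err)
 	}
 	defer iterator.Close()

 	seeded := 0
 	for seeded < count {
 		block, err := iterator.Next()
 		if err != nil {
 			b.Fatal(err)
 		}
 		if block == nil {
 			break // end of chain
 		}
 		ledgerBlock, err := ledger.NewBlockFromCbor(block.Type, block.Cbor)
 		if err != nil {
 			continue // skip blocks that fail to decode
 		}
 		blockModel := models.Block{
 			ID:       block.Slot,
 			Slot:     block.Slot,
 			Hash:     block.Hash,
 			Number:   uint64(seeded),
 			Type:     uint(ledgerBlock.Type()),
 			PrevHash: ledgerBlock.PrevHash().Bytes(),
 			Cbor:     ledgerBlock.Cbor(),
 		}
 		if err := db.BlockCreate(blockModel, nil); err == nil {
 			seeded++
 		}
 	}
 	return seeded
 }

Each real-data benchmark could then call this during setup (e.g. seedBlocks(b, db, immDb, 100)) and keep only the queries inside its for b.Loop() body.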

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a9365e8 and 44603d4.

📒 Files selected for processing (7)
  • Makefile (2 hunks)
  • cmd/dingo/main.go (5 hunks)
  • database/benchmark_test.go (1 hunks)
  • database/plugin/blob/aws/commit_timestamp.go (1 hunks)
  • database/plugin/blob/gcs/commit_timestamp.go (1 hunks)
  • database/plugin/metadata/sqlite/transaction.go (1 hunks)
  • ledger/benchmark_test.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • database/plugin/blob/aws/commit_timestamp.go
  • cmd/dingo/main.go
🧰 Additional context used
🧠 Learnings (10)
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: In utxorpc/sync_test.go, tests should not construct database/models.Block directly for sync server use. Instead, build a models.Block only as a container for CBOR and call models.Block.Decode() to obtain a github.com/blinklabs-io/gouroboros/ledger/common.Block (or the repository’s ledger.Block interface), and use that decoded value in the test.

Applied to files:

  • database/benchmark_test.go
  • ledger/benchmark_test.go
📚 Learning: 2025-11-20T19:58:38.889Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 0
File: :0-0
Timestamp: 2025-11-20T19:58:38.889Z
Learning: In Go, test dependencies (such as github.com/stretchr/testify) are correctly marked as direct dependencies in go.mod, not as indirect dependencies, even when they are only used in test files. This is a standard practice in Go's module system.

Applied to files:

  • Makefile
📚 Learning: 2025-10-22T21:29:26.924Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T21:29:26.924Z
Learning: For tests under ledger/, prefer constructing a minimal ledger Block (MockBlock mirroring chain/chain_test.go) and pass it to Chain.AddBlock, rather than using database/models.Block or relying on models.Block.Decode().

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:19:25.277Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:434-434
Timestamp: 2025-10-22T20:19:25.277Z
Learning: In this repo’s tests, use models.Block.Decode() to obtain a ledger.Block before calling chain.Chain.AddBlock, since AddBlock expects a ledger.Block, while LedgerState.GetBlock returns a database/models.Block.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: In utxorpc/sync_test.go, tests must not pass database/models.Block into chain.Chain.AddBlock. They should call models.Block.Decode() and pass the resulting github.com/blinklabs-io/gouroboros/ledger/common.Block to AddBlock.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:13:35.164Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:56-56
Timestamp: 2025-10-22T20:13:35.164Z
Learning: In tests under ledger/state_test.go, PrimaryChain.AddBlock expects a "github.com/blinklabs-io/gouroboros/ledger/common".Block; use models.Block.Decode() to obtain that type before calling AddBlock.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:18:45.125Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T20:18:45.125Z
Learning: In tests, Chain.AddBlock must be given a gouroboros ledger.Block (not database/models.Block). Use a lightweight MockBlock (as in chain/chain_test.go) or call models.Block.Decode() when starting from stored rows.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:13:35.972Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T20:13:35.972Z
Learning: Unit tests under utxorpc should assert that models.Block.Decode() returns a github.com/blinklabs-io/gouroboros/ledger/common.Block, not rely on models.Block directly in protocol/ledger-facing logic.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: utxorpc tests must pass github.com/blinklabs-io/gouroboros/ledger/common.Block to chain.Chain.AddBlock. They should create database/models.Block only as a storage row, then call models.Block.Decode() to obtain a ledger/common.Block for AddBlock and any ledger/protocol-facing logic.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:13:35.972Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T20:13:35.972Z
Learning: In utxorpc code paths, always operate on github.com/blinklabs-io/gouroboros/ledger/common.Block. Use database/models.Block only as the storage type and call models.Block.Decode() to obtain a ledger/common.Block before further processing.

Applied to files:

  • ledger/benchmark_test.go
🧬 Code graph analysis (1)
ledger/benchmark_test.go (1)
database/models/block.go (1)
  • ErrBlockNotFound (23-23)
🪛 checkmake (0.2.2)
Makefile

[warning] 16-16: Missing required phony target "all"

(minphony)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: lint
  • GitHub Check: docker (ubuntu-latest, amd64)
  • GitHub Check: go-test (1.24.x, ubuntu-latest)
  • GitHub Check: nilaway
  • GitHub Check: go-test (1.24.x, macos-latest)
  • GitHub Check: Analyze (go)
🔇 Additional comments (3)
database/plugin/blob/gcs/commit_timestamp.go (1)

51-54: Formatting-only change keeps behavior identical

Reformatting the Errorf call into multi-line style preserves semantics and improves readability, matching the surrounding logging style. No further changes needed here.

database/plugin/metadata/sqlite/transaction.go (1)

108-108: LGTM! Good modernization to idiomatic Go 1.18+ syntax.

The change from interface{} to any aligns with modern Go conventions while maintaining identical semantics.

Makefile (1)

46-48: No changes required; benchmarks and profiling targets function as intended.

Verification confirms the bench and test-load-profile targets work correctly. Benchmarks are discoverable and execute successfully across multiple packages (database, ledger, etc.), and profiling is initiated as expected. The module re-tidying on each bench run and the state cleanup in test-load-profile behave as noted, with no issues identified.

Comment on lines 1692 to 1699
for b.Loop() {
// Create iterator for each benchmark iteration
iterator, err := immDb.BlocksFromPoint(genesisPoint)
if err != nil {
b.Fatal(err)
}
defer iterator.Close() // Ensure iterator is closed


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

Avoid defer inside for b.Loop() and double-closing the iterator in BenchmarkChainSyncFromGenesis

In BenchmarkChainSyncFromGenesis:

  • A new iterator is created inside for b.Loop() at Line 1694.
  • You call defer iterator.Close() inside that loop (Line 1698).
  • You also call iterator.Close() explicitly at the end of the iteration (Line 1736).

Issues:

  1. defer inside the hot loop adds per-iteration overhead that becomes part of the benchmark measurement.
  2. The defers accumulate and only run when the benchmark function returns, so every iterator stays open for the rest of the run and each one is ultimately closed twice (once explicitly, once via its defer).
  3. The explicit iterator.Close() already correctly closes the iterator at the end of each iteration; the defer is redundant.

Simplest fix: drop the defer and rely on the explicit close:

-	for b.Loop() {
-		// Create iterator for each benchmark iteration
-		iterator, err := immDb.BlocksFromPoint(genesisPoint)
-		if err != nil {
-			b.Fatal(err)
-		}
-		defer iterator.Close() // Ensure iterator is closed
+	for b.Loop() {
+		// Create iterator for each benchmark iteration
+		iterator, err := immDb.BlocksFromPoint(genesisPoint)
+		if err != nil {
+			b.Fatal(err)
+		}
@@
-		blocksProcessed++
-		}
-		iterator.Close() // Close iterator after each iteration
-	}
+			blocksProcessed++
+		}
+		iterator.Close() // Close iterator after each iteration
+	}

This keeps the benchmark semantics the same (one iterator per iteration, closed at the end) without incurring extra defer overhead or double-close behavior.

To sanity-check, you can run the script below and confirm that iterator Close() is called exactly once per iterator instance and never via defer in the hot path.

Also applies to: 1734-1737


🏁 Script executed:

#!/bin/bash
# Make sure there are no remaining defers in tight loops in this file.
rg -n "for b.Loop\(\)" -n ledger/benchmark_test.go -C3
rg -n "defer iterator.Close" ledger/benchmark_test.go -C2

Length of output: 3454


I can confirm the issue from the script output. Let me verify the explicit close location around line 1736:


🏁 Script executed:

#!/bin/bash
# Get context around line 1736 to confirm explicit iterator.Close()
sed -n '1730,1745p' ledger/benchmark_test.go | cat -n

Length of output: 67


🏁 Script executed:

#!/bin/bash
# Get context around line 1736 to confirm explicit iterator.Close()
sed -n '1730,1745p' ledger/benchmark_test.go | cat -n

Length of output: 311


Remove defer iterator.Close() from inside the loop; rely on the explicit close at line 1736

The issue is confirmed. At line 1698, defer iterator.Close() is inside for b.Loop(), which:

  1. Adds per-iteration overhead that distorts benchmark measurements.
  2. Accumulates deferred closes: all of the defers execute only when the benchmark function returns, so every iterator stays open until then and each one is closed a second time on top of the explicit Close.

The explicit iterator.Close() at line 1736 already closes the iterator at the end of each iteration; the defer is redundant and harmful. Remove the defer statement at line 1698 and keep only the explicit close.

🤖 Prompt for AI Agents
In ledger/benchmark_test.go around lines 1692 to 1699, remove the defer
iterator.Close() call inside the for b.Loop() iteration because it adds
per-iteration overhead and accumulates deferred closes that only run when the
benchmark function returns; rely on the existing explicit iterator.Close() at line 1736 to
close the iterator each iteration instead.
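
For reference, the cleaned-up loop would look roughly like the sketch below. It is illustrative only: BlocksFromPoint, the 100-block cap, and the explicit Close come from the snippets quoted above, while the iterator's Next method and its return values are assumptions about the BlockIterator API.

for b.Loop() {
	// One iterator per benchmark iteration.
	iterator, err := immDb.BlocksFromPoint(genesisPoint)
	if err != nil {
		b.Fatal(err)
	}
	blocksProcessed := 0
	for blocksProcessed < 100 {
		// Next() is an assumed method name for advancing the iterator;
		// the real BlockIterator API may differ.
		block, err := iterator.Next()
		if err != nil {
			break
		}
		_ = block
		blocksProcessed++
	}
	// Close explicitly at the end of the iteration; no defer in the hot loop.
	iterator.Close()
}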

@wolf31o2 wolf31o2 force-pushed the test/benchmark-all-the-things branch from 44603d4 to ac72634 Compare November 25, 2025 22:13
@wolf31o2 wolf31o2 requested review from a team as code owners November 25, 2025 22:13
@wolf31o2 wolf31o2 force-pushed the test/benchmark-all-the-things branch from ac72634 to 6ac9cd9 Compare November 25, 2025 22:37

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
cmd/dingo/main.go (1)

165-179: Minor cleanup opportunity: redundant close before exit.

The memory profiling logic is correct, but line 175 explicitly closes the file before os.Exit(1), which is unnecessary since:

  • os.Exit(1) terminates immediately without running defers
  • The OS reclaims file handles on process exit
  • A defer already exists at line 172 (though it won't run due to exit)

You can simplify by removing the explicit close:

 		if err := pprof.WriteHeapProfile(f); err != nil {
 			fmt.Fprintf(os.Stderr, "could not write memory profile: %v\n", err)
-			f.Close() // Explicitly close before exit
 			os.Exit(1)
 		}

Alternatively, if you want guaranteed cleanup, use defer f.Close() and return an error instead of calling os.Exit(1).
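
As a rough illustration of that alternative, the heap-profile write could be pulled into a helper that returns an error, so the deferred close always runs. The helper name and error messages below are hypothetical, not code from this PR; the caller (e.g. main) would log the error and decide whether to exit.

import (
	"fmt"
	"os"
	"runtime"
	"runtime/pprof"
)

// writeMemProfile writes a heap profile to path and reports failures as
// errors instead of exiting, so defer f.Close() is guaranteed to run.
func writeMemProfile(path string) error {
	f, err := os.Create(path)
	if err != nil {
		return fmt.Errorf("could not create memory profile: %w", err)
	}
	defer f.Close()
	runtime.GC() // collect up-to-date heap statistics before writing
	if err := pprof.WriteHeapProfile(f); err != nil {
		return fmt.Errorf("could not write memory profile: %w", err)
	}
	return nil
}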

benchmark_results.md (1)

6-6: Optional: Address style guide formatting.

Static analysis tools suggest minor formatting improvements:

  1. Line 6: Some style guides recommend setting off the year with commas in a month-day-year date; since text continues after the date, "November 25, 2025" → "November 25, 2025,"
  2. Line 54: Tables should be surrounded by blank lines (add blank line after table before "## Performance Changes")

These are purely stylistic and the file will be regenerated by generate_benchmarks.sh, so you could update the template in the script if conformance matters.

Also applies to: 54-54

generate_benchmarks.sh (2)

62-95: Optional: Address shellcheck warnings for robustness.

Shellcheck warns about combining declaration and assignment (SC2155) at lines 64-70, which can mask command failures. While the current approach is acceptable for a reporting script (failures result in empty strings), you could improve robustness:

 parse_benchmark() {
     local line="$1"
-    local name=$(echo "$line" | awk '{print $1}' | sed 's/Benchmark//' | sed 's/-128$//')
-    local ops_sec=$(echo "$line" | awk '{print $2}' | sed 's/,//g')
+    local name
+    name=$(echo "$line" | awk '{print $1}' | sed 's/Benchmark//' | sed 's/-128$//')
+    local ops_sec
+    ops_sec=$(echo "$line" | awk '{print $2}' | sed 's/,//g')
     # ... (similar for other variables)

This separates declaration from assignment, allowing you to check return codes if needed.


173-220: MAJOR_CHANGES logic is correct but could be clearer.

The change detection logic:

  1. Line 198: Sets MAJOR_CHANGES=true for slowdowns >10%
  2. Lines 217-219: Also sets MAJOR_CHANGES=true if there are faster/slower/new benchmarks

This means any performance change or new benchmark triggers the flag. The logic is sound, but the flow could be clearer with a comment explaining that line 218 expands the initial slowdown-only criteria.

Consider adding a comment at line 217:

# Report changes if any improvements, regressions, or new benchmarks detected
if [[ ${#faster_benchmarks[@]} -gt 0 || ${#slower_benchmarks[@]} -gt 0 || ${#new_benchmarks[@]} -gt 0 ]]; then
    MAJOR_CHANGES=true
fi
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 44603d4 and 6ac9cd9.

📒 Files selected for processing (10)
  • Makefile (2 hunks)
  • benchmark_results.md (1 hunks)
  • cmd/dingo/main.go (5 hunks)
  • database/benchmark_test.go (1 hunks)
  • database/plugin/blob/aws/commit_timestamp.go (1 hunks)
  • database/plugin/blob/gcs/commit_timestamp.go (1 hunks)
  • database/plugin/metadata/sqlite/plugin.go (0 hunks)
  • database/plugin/metadata/sqlite/transaction.go (1 hunks)
  • generate_benchmarks.sh (1 hunks)
  • ledger/benchmark_test.go (1 hunks)
💤 Files with no reviewable changes (1)
  • database/plugin/metadata/sqlite/plugin.go
🚧 Files skipped from review as they are similar to previous changes (4)
  • database/benchmark_test.go
  • Makefile
  • database/plugin/blob/gcs/commit_timestamp.go
  • database/plugin/blob/aws/commit_timestamp.go
🧰 Additional context used
🧠 Learnings (7)
📚 Learning: 2025-10-22T21:29:26.924Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T21:29:26.924Z
Learning: For tests under ledger/, prefer constructing a minimal ledger Block (MockBlock mirroring chain/chain_test.go) and pass it to Chain.AddBlock, rather than using database/models.Block or relying on models.Block.Decode().

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:19:25.277Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:434-434
Timestamp: 2025-10-22T20:19:25.277Z
Learning: In this repo’s tests, use models.Block.Decode() to obtain a ledger.Block before calling chain.Chain.AddBlock, since AddBlock expects a ledger.Block, while LedgerState.GetBlock returns a database/models.Block.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:13:35.164Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:56-56
Timestamp: 2025-10-22T20:13:35.164Z
Learning: In tests under ledger/state_test.go, PrimaryChain.AddBlock expects a "github.com/blinklabs-io/gouroboros/ledger/common".Block; use models.Block.Decode() to obtain that type before calling AddBlock.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T21:28:48.396Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T21:28:48.396Z
Learning: In utxorpc/sync_test.go, tests must not pass database/models.Block into chain.Chain.AddBlock. They should call models.Block.Decode() and pass the resulting github.com/blinklabs-io/gouroboros/ledger/common.Block to AddBlock.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:19:26.360Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:91-91
Timestamp: 2025-10-22T20:19:26.360Z
Learning: In utxorpc/sync_test.go, tests should not construct database/models.Block directly for sync server use. Instead, build a models.Block only as a container for CBOR and call models.Block.Decode() to obtain a github.com/blinklabs-io/gouroboros/ledger/common.Block (or the repository’s ledger.Block interface), and use that decoded value in the test.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:13:35.972Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: utxorpc/sync_test.go:339-339
Timestamp: 2025-10-22T20:13:35.972Z
Learning: Unit tests under utxorpc should assert that models.Block.Decode() returns a github.com/blinklabs-io/gouroboros/ledger/common.Block, not rely on models.Block directly in protocol/ledger-facing logic.

Applied to files:

  • ledger/benchmark_test.go
📚 Learning: 2025-10-22T20:18:45.125Z
Learnt from: wolf31o2
Repo: blinklabs-io/dingo PR: 971
File: ledger/state_test.go:248-248
Timestamp: 2025-10-22T20:18:45.125Z
Learning: In tests, Chain.AddBlock must be given a gouroboros ledger.Block (not database/models.Block). Use a lightweight MockBlock (as in chain/chain_test.go) or call models.Block.Decode() when starting from stored rows.

Applied to files:

  • ledger/benchmark_test.go
🧬 Code graph analysis (1)
ledger/benchmark_test.go (7)
database/immutable/immutable.go (1)
  • ImmutableDb (27-29)
database/database.go (1)
  • Database (45-50)
database/utxo.go (1)
  • ErrUtxoNotFound (28-28)
database/models/block.go (1)
  • ErrBlockNotFound (23-23)
chain/manager.go (2)
  • NewManager (46-61)
  • ChainManager (37-44)
ledger/state.go (2)
  • LedgerStateConfig (58-68)
  • NewLedgerState (104-123)
chain/chain.go (1)
  • Chain (35-49)
🪛 LanguageTool
benchmark_results.md

[style] ~6-~6: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...st Environment - Date: November 25, 2025 - Go Version: - OS: - **Archi...

(MISSING_COMMA_AFTER_YEAR)

🪛 markdownlint-cli2 (0.18.1)
benchmark_results.md

54-54: Tables should be surrounded by blank lines

(MD058, blanks-around-tables)

🪛 Shellcheck (0.11.0)
generate_benchmarks.sh

[warning] 64-64: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 65-65: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 66-66: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 67-67: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 68-68: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 69-69: Declare and assign separately to avoid masking return values.

(SC2155)


[warning] 70-70: Declare and assign separately to avoid masking return values.

(SC2155)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: nilaway
  • GitHub Check: docker (ubuntu-latest, amd64)
  • GitHub Check: Analyze (go)
🔇 Additional comments (7)
database/plugin/metadata/sqlite/transaction.go (1)

108-108: LGTM! Modern Go type alias.

The change from interface{} to any is a clean modernization using the Go 1.18+ type alias with no behavioral impact.

cmd/dingo/main.go (2)

79-102: Manual parsing correctly handles both flag syntaxes.

The custom argument parsing before Cobra setup now properly supports both --flag=value and --flag value forms, addressing the previous review concern. This pre-Cobra parsing is necessary because CPU profiling must start before the main execution path begins.

Note: The flags are registered again with Cobra at lines 140-143. While this creates some redundancy, it maintains consistency with the flag system and ensures help text is correct.
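
A minimal sketch of what such a pre-Cobra scan looks like, supporting both syntaxes; the function name and structure here are illustrative, not the actual code in cmd/dingo/main.go.

import "strings"

// scanFlag looks for --name=value or --name value in the raw arguments
// before Cobra runs; it returns "" when the flag is absent.
func scanFlag(args []string, name string) string {
	prefix := "--" + name
	for i, arg := range args {
		if strings.HasPrefix(arg, prefix+"=") {
			return strings.TrimPrefix(arg, prefix+"=")
		}
		if arg == prefix && i+1 < len(args) {
			return args[i+1]
		}
	}
	return ""
}

// usage (illustrative): cpuProfilePath := scanFlag(os.Args[1:], "cpuprofile")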


104-121: CPU profiling implementation is correct.

The profiling lifecycle is properly structured (a sketch of the standard pattern follows the list):

  • File created and deferred for closure
  • pprof.StartCPUProfile called with error handling
  • pprof.StopCPUProfile deferred to run at program exit
  • User-facing messages provide clear feedback
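
For readers unfamiliar with pprof, the lifecycle being described follows the standard pattern sketched below; variable names are simplified and this is not the file's exact code.

f, err := os.Create(cpuProfilePath)
if err != nil {
	fmt.Fprintf(os.Stderr, "could not create CPU profile: %v\n", err)
	os.Exit(1)
}
defer f.Close()
if err := pprof.StartCPUProfile(f); err != nil {
	fmt.Fprintf(os.Stderr, "could not start CPU profile: %v\n", err)
	os.Exit(1)
}
// Flush and stop sampling when the surrounding function returns.
defer pprof.StopCPUProfile()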
ledger/benchmark_test.go (4)

1793-1837: LGTM! Iterator handling correctly revised per past review.

The BenchmarkChainSyncFromGenesis benchmark now correctly:

  • Creates a new iterator per iteration inside b.Loop() (line 1795)
  • Processes up to 100 blocks per iteration (lines 1801-1835)
  • Explicitly closes the iterator at the end of each iteration (line 1836)
  • No longer has defer inside the hot loop

This addresses the past review concerns about iterator lifecycle and loop structure.


2042-2117: Concurrent benchmark structure is sound.

The BenchmarkConcurrentQueries correctly uses b.RunParallel to simulate concurrent load (a sketch of the pattern follows the list). The benchmark:

  • Rotates through different query types per worker
  • Safely ignores errors (acceptable for throughput measurement)
  • Reports meaningful queries/sec metric
  • Avoids shared state mutations
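
A stripped-down sketch of that b.RunParallel shape, with placeholder queries standing in for the real UTxO/pool/datum lookups; everything below is illustrative, not the benchmark's actual code.

import (
	"sync/atomic"
	"testing"
)

func BenchmarkConcurrentQueriesSketch(b *testing.B) {
	queries := []func() error{
		func() error { return nil }, // placeholder for a UTxO lookup
		func() error { return nil }, // placeholder for a pool/datum lookup
	}
	var total atomic.Int64
	b.RunParallel(func(pb *testing.PB) {
		i := 0
		for pb.Next() {
			// Rotate through query types; errors are ignored because only
			// throughput is being measured.
			_ = queries[i%len(queries)]()
			i++
		}
		total.Add(int64(i))
	})
	b.ReportMetric(float64(total.Load())/b.Elapsed().Seconds(), "queries/sec")
}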

1868-1875: The review comment is incorrect—Logger is not required and will not cause nil pointer dereferences.

The NewLedgerState function in ledger/state.go (lines 111–114) explicitly handles a nil Logger by creating a default logger that discards logs:

if cfg.Logger == nil {
    // Create logger to throw away logs
    // We do this so we don't have to add guards around every log operation
    cfg.Logger = slog.New(slog.NewJSONHandler(io.Discard, nil))
}

This design ensures that passing only Database and ChainManager to LedgerStateConfig (as the benchmark does) is safe and intentional. No additional fields are required.
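
In other words, a benchmark can construct the state with just those two fields, roughly as sketched below; db and cm stand for an already-opened database and chain manager, and the NewLedgerState signature and return values are simplified assumptions.

ls, err := ledger.NewLedgerState(ledger.LedgerStateConfig{
	Database:     db,
	ChainManager: cm,
	// Logger deliberately omitted: NewLedgerState substitutes an
	// io.Discard-backed slog.Logger when it is nil.
})
if err != nil {
	b.Fatal(err)
}
_ = ls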


34-40: Original review comment concern is invalid.

After verifying the ImmutableDb implementation in ./database/immutable/immutable.go (lines 27–48), the struct contains only a single dataDir string field. The New() function merely validates the directory exists with os.Stat() and returns the struct—it opens no file handles or other resources that require explicit cleanup.

Only BlockIterator (which holds chunk file handles) requires a Close() method, which is correctly applied with defer iterator.Close() at line 98 in the benchmark file.

The openImmutableTestDB helper is implemented correctly and requires no resource cleanup mechanism.
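
The resulting resource-handling shape in the helper and its callers is roughly the sketch below; immutable.New, BlocksFromPoint, and the iterator's Close are taken from the discussion above, while the New argument list, testDataDir, and genesisPoint are stand-ins.

// New only stats the directory, so the ImmutableDb itself needs no Close.
immDb, err := immutable.New(testDataDir)
if err != nil {
	b.Fatal(err)
}

// Only the BlockIterator holds chunk file handles and must be closed.
iterator, err := immDb.BlocksFromPoint(genesisPoint)
if err != nil {
	b.Fatal(err)
}
defer iterator.Close()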

Signed-off-by: Chris Gianelloni <wolf31o2@blinklabs.io>
@wolf31o2 wolf31o2 force-pushed the test/benchmark-all-the-things branch from 6ac9cd9 to 3093541 Compare November 25, 2025 23:07