
Conversation

@yuantailing
Member

@yuantailing yuantailing commented Nov 25, 2025

Summary by CodeRabbit

  • New Features

    • Interactive HTML dashboard for visualizing kernel latency analysis with synchronized table and charts
    • Profile parsing utility to extract and analyze kernel execution data from profiling reports
    • Enhanced benchmarking workflow with improved configuration and MoE routing options
    • Configurable Slurm container naming support
  • Documentation

    • Updated benchmarking guide with new command examples and Slurm container management instructions
    • Added profile analysis parsing documentation
  • Chores

    • Updated test suite for new profiling and parsing workflows


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
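
For example, a typical invocation that limits the run to a single stage and disables fail-fast (the stage name is illustrative, taken from the examples above) would be:

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast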

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of care and validation can break the top of tree.

Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
@yuantailing yuantailing requested a review from a team as a code owner November 25, 2025 08:42
@yuantailing
Member Author

/bot run

@yuantailing yuantailing enabled auto-merge (squash) November 25, 2025 08:45
@coderabbitai
Contributor

coderabbitai bot commented Nov 25, 2025

📝 Walkthrough

The PR refactors the layer-wise benchmarking infrastructure by introducing new profiling and analysis capabilities. It replaces run_single.py with a configurable run.py orchestration script, adds a parse.py script to extract kernel data from Nsight Systems reports, and includes an interactive HTML dashboard. Routing logic is simplified in DeepSeekV3Runner and generalized via new balanced routing utilities. Documentation is updated with new examples and supporting container-management scripts.

Changes

  • Gitignore and profiling directory
    • .gitignore: Added /examples/layer_wise_benchmarks/profiles/ to the Generated files section (two locations).
  • Documentation and examples
    • examples/layer_wise_benchmarks/README.md: Relabeled MPI execution examples, replaced run_single.sh with run.sh, and updated command invocations and model parameters. Added a "Batched run" section with batch configuration examples. Expanded Slurm usage notes with container guidance. Introduced a "Parse profiles" section with Python-based parsing instructions.
  • Core benchmarking scripts
    • examples/layer_wise_benchmarks/run.py: New orchestration script for layer-wise benchmarking with CLI/YAML config integration, MPI setup, KV cache management, warmup/benchmarking loops, NVTX profiling, and detailed timing statistics.
    • examples/layer_wise_benchmarks/run.sh
    • examples/layer_wise_benchmarks/run_single.py
    • examples/layer_wise_benchmarks/parse.py
  • Container and SLURM utilities
    • examples/layer_wise_benchmarks/slurm_init_containers.sh: Made the container name configurable via the CONTAINER_NAME environment variable with a default fallback.
    • examples/layer_wise_benchmarks/slurm_query_container_name.sh
  • Interactive dashboard
    • examples/layer_wise_benchmarks/template.html: New Jinja2-rendered HTML dashboard with echarts integration for kernel latency analysis. Includes a hierarchical table with fixed columns, synchronized bar/sunburst charts, and keyboard/click-based interaction.
  • Runner implementations
    • tensorrt_llm/tools/layer_wise_benchmarks/deepseekv3_runner.py: Removed routing/balance logic and the RoutingMethod class. Simplified the gate override workflow. Added a has_mamba_metadata() static method. Removed the replace_routing_method() method.
    • tensorrt_llm/tools/layer_wise_benchmarks/runner_interface.py
    • tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py
  • Module initialization and utilities
    • tensorrt_llm/tools/layer_wise_benchmarks/__init__.py: Added import and export of the mark_ranges function. Updated __all__ to include mark_ranges alongside BalanceMethod and get_runner_cls.
    • tensorrt_llm/tools/layer_wise_benchmarks/mark_utils.py
  • Tests
    • tests/unittest/tools/test_layer_wise_benchmarks.py: Updated test invocations to use run.sh instead of run_single.sh. Added the PROFILE_DIR environment variable, extended CLI options (--no-enable-attention-dp, --moe-backend, --balance-method), and post-run parse.py invocations for profile analysis.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant run.sh
    participant run.py
    participant Runner
    participant KV Cache Mgr
    participant CUDA/Profiler
    participant parse.py
    participant template.html

    User->>run.sh: Execute benchmark
    run.sh->>run.py: Invoke with config
    run.py->>run.py: Parse YAML + CLI args
    run.py->>KV Cache Mgr: Initialize cache manager
    run.py->>Runner: Create runner instance
    run.py->>CUDA/Profiler: Start profiling (nsys)
    run.py->>Runner: Warmup loop with NVTX annotations
    run.py->>Runner: Main benchmark loop (multiple batch/seq configs)
    Runner->>CUDA/Profiler: CUDA events for timing
    run.py->>CUDA/Profiler: Stop profiling, generate .nsys-rep
    
    Note over run.sh: Output: nsys-rep + log file in PROFILE_DIR
    
    User->>parse.py: Parse profiles
    parse.py->>CUDA/Profiler: nsys export (if cache stale)
    parse.py->>parse.py: Extract kernel events from SQLite
    parse.py->>parse.py: Map kernels, compute timing, normalize
    parse.py->>parse.py: Generate CSV + JSON hierarchical structure
    
    Note over parse.py: Output: CSV + JSON in PROFILE_DIR
    
    User->>template.html: Open dashboard
    template.html->>template.html: Load headerConfig + rawData (JSON)
    template.html->>template.html: Render hierarchical table + charts
    User->>template.html: Click cells / rows for interaction
    template.html->>template.html: Update bar chart + sunburst chart
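
The timing step in the diagram can be illustrated with a minimal sketch of the CUDA-event pattern discussed in the review comments below; this is not the actual run.py, and the step_fn callback is a placeholder for the runner's forward pass:

import torch

def time_iterations(step_fn, warmup_times: int, run_times: int):
    # One CUDA event per iteration boundary; record every event once up front
    # because event allocation is lazy (mirrors the "warmup events" note below).
    n = warmup_times + run_times
    events = [torch.cuda.Event(enable_timing=True) for _ in range(n + 1)]
    for e in events:
        e.record()
    torch.cuda.synchronize()

    events[0].record()
    for i in range(n):
        step_fn()  # placeholder for the runner forward pass
        events[i + 1].record()
    torch.cuda.synchronize()

    # Per-iteration latency in milliseconds; discard the warmup iterations.
    latencies_ms = [a.elapsed_time(b) for a, b in zip(events, events[1:])]
    return latencies_ms[warmup_times:]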

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring extra attention:

  • examples/layer_wise_benchmarks/parse.py — Complex kernel extraction logic with SQLite queries, keyword-based kernel classification, and shortest common supersequence algorithm for hierarchical alignment (a generic sketch of this alignment idea follows this list); requires careful validation of data extraction and normalization steps.
  • examples/layer_wise_benchmarks/template.html — Interactive JavaScript logic for chart synchronization, state management, keyboard/click handlers, and dynamic header rendering; potential for UX edge cases or race conditions in chart updates.
  • tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py — Multiple new balanced routing functions with complex logic for token/expert selection and rank balancing; mock-based patching of fused MoE backends within context managers requires careful verification of restoration and side effects.
  • Interface rename (runner_interface.py and implementations) — Method rename from replace_routing_method() to replace_routing_method_ctx() affects all runner implementations; verify all call sites are updated consistently.
  • examples/layer_wise_benchmarks/run.py — Large new script with YAML config merging, MPI setup, warmup/benchmark orchestration, and detailed timing statistics; ensure error handling and edge cases are properly tested.
  • Routing logic removal in deepseekv3_runner.py — Significant simplification by removing RoutingMethod class; confirm the new simplified approach does not break existing routing workflows or gate customization.
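
To make the supersequence alignment mentioned above concrete, here is a generic shortest-common-supersequence sketch built on an LCS table. It only illustrates the idea; it is not the parse.py implementation, and the kernel names in the example are made up.

def shortest_common_supersequence(a, b):
    # Longest-common-subsequence DP table
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    # Backtrack, merging both sequences and emitting common items only once
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            out.append(a[i - 1])
            i -= 1
        else:
            out.append(b[j - 1])
            j -= 1
    out.extend(reversed(a[:i]))
    out.extend(reversed(b[:j]))
    return list(reversed(out))

# Both per-run kernel-name sequences are subsequences of the aligned result:
# shortest_common_supersequence(["gemm", "moe", "copy"], ["gemm", "copy", "reduce"])
# returns ["gemm", "moe", "copy", "reduce"]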

Possibly related PRs

Suggested reviewers

  • kaiyux
  • chzblych
  • lucaslie

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

  • Description check (⚠️ Warning): The PR description is incomplete; it contains only the template structure without filling in the required Description and Test Coverage sections, though the PR Checklist was marked complete. Resolution: please complete the Description section explaining what the parser does and why it was added, and the Test Coverage section listing relevant tests (e.g., parse.py invocations in test_layer_wise_benchmarks.py).
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: you can run @coderabbitai generate docstrings to improve docstring coverage.

✅ Passed checks (1 passed)

  • Title check (✅ Passed): The title '[None][feat] Add a parser to layer-wise benchmarks' clearly describes the main feature added, a parser tool for layer-wise benchmarks, which is evident from the new parse.py file and related changes.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (7)
tensorrt_llm/tools/layer_wise_benchmarks/mark_utils.py (1)

13-24: Consider adding idempotency check (optional refinement).

The mark_ranges function globally replaces forward methods with nvtx-decorated versions. While appropriate for a profiling tool, calling it multiple times would create nested decorators. Consider adding a guard to make it idempotent if there's any risk of multiple invocations.

For example:

 def mark_ranges():
+    # Check if already marked to avoid double-wrapping
+    if hasattr(DeepseekV3Gate.forward, '_nvtx_marked'):
+        return
     DeepseekV3Gate.forward = nvtx.annotate("DeepseekV3Gate")(DeepseekV3Gate.forward)
+    DeepseekV3Gate.forward._nvtx_marked = True
     # ... repeat for other classes

However, this is only necessary if mark_ranges might be called multiple times in the same process.

examples/layer_wise_benchmarks/parse.py (2)

427-431: Enable autoescape or use safe JSON rendering.

The Jinja2 environment is created without autoescape, which could lead to XSS if any data contains HTML special characters. Since the template embeds data directly into JavaScript, consider using the tojson filter or json.dumps with proper escaping.

-js_header_config = [{"name": problem["text"]} for problem in problem_set]
-loader = jinja2.FileSystemLoader(Path(__file__).parent)
-template = jinja2.Environment(loader=loader).get_template("template.html")
-with html_file_path.open("w") as f:
-    f.write(template.render(headerConfig=js_header_config, rawData=js_data))
+js_header_config = [{"name": problem["text"]} for problem in problem_set]
+loader = jinja2.FileSystemLoader(Path(__file__).parent)
+env = jinja2.Environment(loader=loader, autoescape=jinja2.select_autoescape())
+template = env.get_template("template.html")
+with html_file_path.open("w") as f:
+    f.write(
+        template.render(
+            headerConfig=jinja2.Markup(json.dumps(js_header_config)),
+            rawData=jinja2.Markup(json.dumps(js_data)),
+        )
+    )

Alternatively, update template.html to use {{ headerConfig | tojson }} and {{ rawData | tojson }}.


231-235: Add descriptive assertion message for debugging.

If kernel sequences differ between runs, the bare assertion provides no context for debugging.

 for problem_id in range(len(kernels)):
     required_seq = [demangledName for demangledName, _, _, _ in kernels[problem_id][0]]
     for run_id in range(len(kernels[problem_id])):
         seq = [demangledName for demangledName, _, _, _ in kernels[problem_id][run_id]]
-        assert seq == required_seq
+        assert seq == required_seq, (
+            f"Kernel sequence mismatch in problem {problem_id}, run {run_id}: "
+            f"expected {len(required_seq)} kernels, got {len(seq)}"
+        )
examples/layer_wise_benchmarks/run.py (1)

169-172: Avoid list comprehension for side effects.

Using a list comprehension purely for side effects is not idiomatic and allocates an unnecessary list.

 events = [
     torch.cuda.Event(enable_timing=True) for _ in range(args.warmup_times + args.run_times + 1)
 ]
-[e.record() for e in events]  # Explicitly warmup events because torch is lazy
+for e in events:  # Explicitly warmup events because torch is lazy
+    e.record()
examples/layer_wise_benchmarks/template.html (1)

693-726: Remove debug console.log statement.

The console.log(maxDepth) at line 706 appears to be a debug statement that should be removed.

         for (let node of rawData) {
             maxDepth = Math.max(maxDepth, getDepth(node));
         }
-        console.log(maxDepth);
 
         const container = document.getElementById('level-buttons');
tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py (2)

40-59: Fix typo: "reciever_rank" → "receiver_rank".

Minor spelling error in the loop variable name.

     # Second, each receiver selects target expert
     target_expert = torch.empty_like(target_rank)
-    for reciever_rank in range(world_size):
-        mask = target_rank == reciever_rank
+    for receiver_rank in range(world_size):
+        mask = target_rank == receiver_rank
         experts_per_rank = num_experts // world_size
         local_expert = torch.arange(num_tokens * top_k) % experts_per_rank
-        target_expert[mask] = (reciever_rank * experts_per_rank) + local_expert
+        target_expert[mask] = (receiver_rank * experts_per_rank) + local_expert

231-244: Prefix unused cls parameter with underscore.

The cls parameter is unused but required for interface compatibility. Prefix with underscore to indicate intentional non-use.

         def make_select_alltoall_method_type(select_alltoall_method_type_orig):
             def select_alltoall_method_type(
-                cls: type, mapping: Mapping, top_k: int, *args, **kwargs
+                _cls: type, mapping: Mapping, top_k: int, *args, **kwargs
             ):
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4742c13 and 0663598.

📒 Files selected for processing (15)
  • .gitignore (1 hunks)
  • examples/layer_wise_benchmarks/README.md (3 hunks)
  • examples/layer_wise_benchmarks/parse.py (1 hunks)
  • examples/layer_wise_benchmarks/run.py (1 hunks)
  • examples/layer_wise_benchmarks/run.sh (2 hunks)
  • examples/layer_wise_benchmarks/run_single.py (0 hunks)
  • examples/layer_wise_benchmarks/slurm_init_containers.sh (2 hunks)
  • examples/layer_wise_benchmarks/slurm_query_container_name.sh (1 hunks)
  • examples/layer_wise_benchmarks/template.html (1 hunks)
  • tensorrt_llm/tools/layer_wise_benchmarks/__init__.py (1 hunks)
  • tensorrt_llm/tools/layer_wise_benchmarks/deepseekv3_runner.py (1 hunks)
  • tensorrt_llm/tools/layer_wise_benchmarks/mark_utils.py (1 hunks)
  • tensorrt_llm/tools/layer_wise_benchmarks/runner_interface.py (1 hunks)
  • tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py (5 hunks)
  • tests/unittest/tools/test_layer_wise_benchmarks.py (8 hunks)
💤 Files with no reviewable changes (1)
  • examples/layer_wise_benchmarks/run_single.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use from package.subpackage import foo and then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)
Python filenames should use snake_case (e.g., some_file.py)
Python class names should use PascalCase (e.g., class SomeClass)
Python function and method names should use snake_case (e.g., def my_awesome_function():)
Python local variable names should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile = ...)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL = ...)
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., self.x = 5 followed by """<type>: Description of 'x'""" )
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic

Files:

  • tensorrt_llm/tools/layer_wise_benchmarks/__init__.py
  • examples/layer_wise_benchmarks/parse.py
  • examples/layer_wise_benchmarks/run.py
  • tests/unittest/tools/test_layer_wise_benchmarks.py
  • tensorrt_llm/tools/layer_wise_benchmarks/mark_utils.py
  • tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py
  • tensorrt_llm/tools/layer_wise_benchmarks/runner_interface.py
  • tensorrt_llm/tools/layer_wise_benchmarks/deepseekv3_runner.py
**/*.{cpp,h,cu,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top

Files:

  • tensorrt_llm/tools/layer_wise_benchmarks/__init__.py
  • examples/layer_wise_benchmarks/parse.py
  • examples/layer_wise_benchmarks/run.py
  • tests/unittest/tools/test_layer_wise_benchmarks.py
  • tensorrt_llm/tools/layer_wise_benchmarks/mark_utils.py
  • tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py
  • tensorrt_llm/tools/layer_wise_benchmarks/runner_interface.py
  • tensorrt_llm/tools/layer_wise_benchmarks/deepseekv3_runner.py
🧠 Learnings (7)
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.

Applied to files:

  • .gitignore
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.

Applied to files:

  • .gitignore
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • examples/layer_wise_benchmarks/run.py
  • tests/unittest/tools/test_layer_wise_benchmarks.py
📚 Learning: 2025-08-21T21:48:35.135Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp:399-417
Timestamp: 2025-08-21T21:48:35.135Z
Learning: CUTLASS extensions in TensorRT-LLM (located under cpp/tensorrt_llm/cutlass_extensions/) are designed to integrate with and extend functionality in the external CUTLASS repository. When analyzing these extensions, their consumers and functionality wiring may exist in the CUTLASS codebase rather than within TensorRT-LLM itself.

Applied to files:

  • tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
Repo: NVIDIA/TensorRT-LLM PR: 6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.

Applied to files:

  • tensorrt_llm/tools/layer_wise_benchmarks/deepseekv3_runner.py
📚 Learning: 2025-09-19T21:28:13.751Z
Learnt from: jhaotingc
Repo: NVIDIA/TensorRT-LLM PR: 7856
File: cpp/tensorrt_llm/thop/fp8BlockScaleMoe.cpp:159-166
Timestamp: 2025-09-19T21:28:13.751Z
Learning: In TensorRT-LLM blockScaleMoe routing (cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/runner.cu), the DeepSeek routing method performs reinterpret_cast<float*>(routingLogits) at line 89, which could cause issues if routing_logits are BF16. However, Qwen3-FP8 models use RenormalizeNaive routing method and are not affected by this dtype casting issue.

Applied to files:

  • tensorrt_llm/tools/layer_wise_benchmarks/deepseekv3_runner.py
📚 Learning: 2025-10-20T17:07:18.745Z
Learnt from: nvchenghaoz
Repo: NVIDIA/TensorRT-LLM PR: 8469
File: tensorrt_llm/_torch/auto_deploy/models/patches/nemotron_h.py:98-116
Timestamp: 2025-10-20T17:07:18.745Z
Learning: In NemotronH models (tensorrt_llm/_torch/auto_deploy/models/patches/nemotron_h.py), the gate (self.gate) returns topk_indices and topk_weights that are already in the correct shape to be passed directly to torch_ops.auto_deploy.torch_moe without needing to reshape them when hidden_states is flattened.

Applied to files:

  • tensorrt_llm/tools/layer_wise_benchmarks/deepseekv3_runner.py
🧬 Code graph analysis (4)
tensorrt_llm/tools/layer_wise_benchmarks/__init__.py (1)
tensorrt_llm/tools/layer_wise_benchmarks/mark_utils.py (1)
  • mark_ranges (13-24)
examples/layer_wise_benchmarks/run.py (5)
tensorrt_llm/_torch/modules/multi_stream_utils.py (1)
  • with_multi_stream (26-32)
tensorrt_llm/_utils.py (2)
  • local_mpi_rank (560-561)
  • mpi_world_size (556-557)
tensorrt_llm/tools/layer_wise_benchmarks/runner_interface.py (3)
  • BalanceMethod (10-14)
  • create_mapping (48-49)
  • create_kv_cache_manager (36-44)
tensorrt_llm/tools/layer_wise_benchmarks/runner_factory.py (1)
  • get_runner_cls (7-13)
tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py (2)
  • create_mapping (522-539)
  • create_kv_cache_manager (430-519)
tests/unittest/tools/test_layer_wise_benchmarks.py (1)
tests/integration/defs/trt_test_alternative.py (1)
  • check_call (250-258)
tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py (1)
tensorrt_llm/tools/layer_wise_benchmarks/runner_interface.py (2)
  • BalanceMethod (10-14)
  • replace_routing_method_ctx (31-32)
🪛 markdownlint-cli2 (0.18.1)
examples/layer_wise_benchmarks/README.md

124-124: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Ruff (0.14.5)
examples/layer_wise_benchmarks/parse.py

35-35: subprocess call: check for execution of untrusted input

(S603)


36-45: Starting a process with a partial executable path

(S607)


161-168: Possible SQL injection vector through string-based query construction

(S608)


211-211: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)


224-224: Loop control variable runtime_start not used within loop body

(B007)


225-225: Loop control variable runtime_end not used within loop body

(B007)


226-226: Loop control variable capture_start not used within loop body

(B007)


227-227: Loop control variable capture_end not used within loop body

(B007)


391-391: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)


429-429: By default, jinja2 sets autoescape to False. Consider using autoescape=True or the select_autoescape function to mitigate XSS vulnerabilities.

(S701)

examples/layer_wise_benchmarks/run.py

75-75: Avoid specifying long messages outside the exception class

(TRY003)


85-85: Avoid specifying long messages outside the exception class

(TRY003)


220-220: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)


220-220: Prefer itertools.pairwise() over zip() when iterating over successive pairs

Replace zip() with itertools.pairwise()

(RUF007)

tests/unittest/tools/test_layer_wise_benchmarks.py

15-15: subprocess call: check for execution of untrusted input

(S603)


38-38: subprocess call: check for execution of untrusted input

(S603)


65-65: subprocess call: check for execution of untrusted input

(S603)


90-90: subprocess call: check for execution of untrusted input

(S603)


108-108: subprocess call: check for execution of untrusted input

(S603)


109-109: Starting a process with a partial executable path

(S607)


120-120: subprocess call: check for execution of untrusted input

(S603)


140-140: subprocess call: check for execution of untrusted input

(S603)


141-141: Starting a process with a partial executable path

(S607)

tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py

80-80: Avoid specifying long messages outside the exception class

(TRY003)


88-88: Avoid specifying long messages outside the exception class

(TRY003)


93-93: Avoid specifying long messages outside the exception class

(TRY003)


123-125: Avoid specifying long messages outside the exception class

(TRY003)


233-233: Unused function argument: cls

(ARG001)


415-415: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)


426-426: zip() without an explicit strict= parameter

Add explicit value for parameter strict=

(B905)

🪛 Shellcheck (0.11.0)
examples/layer_wise_benchmarks/slurm_query_container_name.sh

[error] 16-16: Since you double quoted this, it will not word split, and the loop will only run once.

(SC2066)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (21)
.gitignore (1)

76-76: Profile outputs correctly excluded from version control.

The addition of /examples/layer_wise_benchmarks/profiles/ to the "Generated files" section is appropriate and aligns with the PR's introduction of new profiling infrastructure. Generated profiling data should not be committed to the repository.

examples/layer_wise_benchmarks/slurm_init_containers.sh (1)

6-6: LGTM! Good practice making the container name configurable.

The introduction of a configurable CONTAINER_NAME environment variable with a sensible default improves flexibility without breaking existing usage.

Also applies to: 50-50

tensorrt_llm/tools/layer_wise_benchmarks/__init__.py (1)

1-5: LGTM! Correctly exports the new profiling utility.

The mark_ranges function is properly imported and exposed in the public API, following Python packaging best practices.

tests/unittest/tools/test_layer_wise_benchmarks.py (2)

14-29: LGTM! Test correctly updated to use the new run.sh interface.

The test now uses a configurable profile_dir and the updated run.sh entrypoint, properly passing the PROFILE_DIR environment variable.


108-111: Review comment is incorrect; parse.py invocation pattern is intentional.

The parse.py step is deliberately invoked only in tests using config_gen.yaml (generation phase profiling) and not in tests using config_ctx.yaml (context phase profiling). This reflects a design choice where generation profiling requires post-processing via parse.py, while context profiling does not. The pattern is consistent and correct.

Likely an incorrect or invalid review comment.

examples/layer_wise_benchmarks/run.sh (1)

16-17: LGTM! Improved configurability and logging.

The script now uses a configurable PROFILE_DIR with a sensible default and properly logs output to both console and file. The transition from run_single.py to run.py aligns with the PR's orchestration improvements.

Also applies to: 27-27, 38-40

examples/layer_wise_benchmarks/README.md (1)

1-146: LGTM! Comprehensive documentation of the new workflow.

The README thoroughly documents the transition from run_single.sh to run.sh, the new batched execution capabilities, profile parsing with parse.py, and updated Slurm integration. The examples clearly demonstrate the various configuration options.

tensorrt_llm/tools/layer_wise_benchmarks/runner_interface.py (1)

31-32: All implementations of the renamed abstract method have been properly updated.

Verification confirms:

  • Abstract method (runner_interface.py:31): replace_routing_method_ctx defined with correct signature
  • All RunnerBase implementations properly updated:
    • DeepSeekV3Runner inherits implementation from RunnerMixin
    • Qwen3NextRunner inherits implementation from RunnerMixin
  • Implementation (runner_utils.py:391): RunnerMixin provides the method body with matching signature
  • No orphaned references: Zero occurrences of old method name replace_routing_method anywhere in codebase
  • Usage sites updated: Examples in run.py (lines 157, 197) use the new method name
tensorrt_llm/tools/layer_wise_benchmarks/deepseekv3_runner.py (1)

16-96: LGTM!

The simplified DeepSeekV3Runner class implementation is clean. The removal of complex routing/balance logic in favor of the context-manager approach (replace_routing_method_ctx from RunnerMixin) improves maintainability.

examples/layer_wise_benchmarks/parse.py (3)

30-46: LGTM!

The lazy_convert_sqlite function correctly implements caching behavior based on file modification times. The subprocess.check_call usage with explicit argument list (not shell=True) is the secure approach.
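
A minimal sketch of that modification-time caching pattern, for orientation only (this is not the actual parse.py, and the nsys export flags shown are assumptions):

import subprocess
from pathlib import Path

def lazy_convert_sqlite(rep_path: Path) -> Path:
    # Re-export the .nsys-rep to SQLite only when the cached .sqlite file is
    # missing or older than the report it was derived from.
    sqlite_path = rep_path.with_suffix(".sqlite")
    if not sqlite_path.exists() or sqlite_path.stat().st_mtime < rep_path.stat().st_mtime:
        subprocess.check_call([
            "nsys", "export", "--type", "sqlite", "--force-overwrite", "true",
            "--output", str(sqlite_path), str(rep_path),
        ])
    return sqlite_path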


151-168: LGTM!

The dynamic SQL query construction is safe here. The table names (CUPTI_ACTIVITY_KIND_MEMCPY, CUPTI_ACTIVITY_KIND_MEMSET) are hardcoded constants, not user input. The conditional addition based on schema introspection is a valid pattern for handling optional tables in Nsight Systems exports.


346-364: LGTM!

The overlap and space time calculations correctly measure parallel kernel execution. The negative overlap value (line 359) appropriately represents time saved through parallelism, while space represents idle time between kernels.
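
As a rough illustration of that accounting (not the actual implementation; here overlap is accumulated as a positive duration rather than stored as a negative value):

def space_and_overlap(intervals):
    # intervals: (start_ns, end_ns) pairs sorted by start time
    space = 0
    overlap = 0
    prev_end = intervals[0][1]
    for start, end in intervals[1:]:
        gap = start - prev_end
        if gap >= 0:
            space += gap      # idle time between consecutive kernels
        else:
            overlap += -gap   # time saved by kernels running in parallel
        prev_end = max(prev_end, end)
    return space, overlap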

examples/layer_wise_benchmarks/run.py (3)

175-188: LGTM!

The NVTX annotation pattern with pass is intentional per the comment to avoid clutter in the Nsight Systems UI. The problem spec annotations provide useful context for parsing in parse.py.


218-234: LGTM!

The timing statistics calculation is correct. The zip(events, events[1:]) pattern is appropriate for Python 3.8+ compatibility (itertools.pairwise requires 3.10+).


130-167: LGTM!

The warmup logic correctly handles autotuning on the first configuration (max batch/seq_len) followed by warmup iterations for all problem configurations. The assertions at lines 147-148 provide good early validation.

examples/layer_wise_benchmarks/template.html (3)

1-11: LGTM - Good security practice with SRI.

The echarts CDN script includes a Subresource Integrity (SRI) hash and a crossorigin attribute, which is good security practice to ensure the script hasn't been tampered with.


281-310: LGTM!

The processData function correctly aggregates timing data through the node hierarchy with a clean recursive approach. The totalNode creation provides a proper data structure for the Total row.


607-679: LGTM - Good accessibility support.

The keyboard navigation implementation is comprehensive, supporting arrow keys for navigation and +/- for expand/collapse. The handling of edge cases (total row, boundaries) and smooth scrolling improves user experience.

tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py (3)

62-63: LGTM - Caching is appropriate for benchmarking.

The functools.cache usage is appropriate here since the balanced selection functions are deterministic and called repeatedly with the same parameters during benchmarking runs.


390-427: LGTM - Well-structured context manager.

The replace_routing_method_ctx correctly implements the save/restore pattern with proper cleanup in the finally block. The validation at lines 392-412 provides clear error messages for unsupported configurations.


65-93: LGTM - Comprehensive test coverage.

The test_get_balanced_selection function provides thorough validation of the balanced selection algorithm, checking for duplicate experts, per-rank balance, and global expert balance across various parameter combinations.
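
For intuition, here is a toy sketch of what a cached, balanced selection can look like; the function name and shapes are illustrative rather than the real runner_utils API, and it also shows the functools.cache pattern noted above:

import functools
import torch

@functools.cache
def get_balanced_selection(num_tokens: int, top_k: int, num_experts: int) -> torch.Tensor:
    # Round-robin assignment: every token gets top_k distinct experts and every
    # expert receives an (almost) equal share of the num_tokens * top_k slots.
    flat = torch.arange(num_tokens * top_k) % num_experts
    return flat.reshape(num_tokens, top_k)

selection = get_balanced_selection(8, 2, 4)
# No duplicate expert within a token (holds while top_k <= num_experts).
assert all(len(set(row.tolist())) == 2 for row in selection)
# Global balance: each expert appears num_tokens * top_k / num_experts times.
assert torch.bincount(selection.flatten(), minlength=4).tolist() == [4, 4, 4, 4]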

@tensorrt-cicd
Collaborator

PR_Github #25707 [ run ] triggered by Bot. Commit: 0663598

Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
@yuantailing
Member Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #25715 [ run ] triggered by Bot. Commit: 08b9f59

@tensorrt-cicd
Collaborator

PR_Github #25707 [ run ] completed with state ABORTED. Commit: 0663598
LLM/main/L0_MergeRequest_PR #19489 (Blue Ocean) completed with status: ABORTED

Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
@yuantailing
Member Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #25723 [ run ] triggered by Bot. Commit: 7e143ac

@tensorrt-cicd
Collaborator

PR_Github #25715 [ run ] completed with state ABORTED. Commit: 08b9f59
LLM/main/L0_MergeRequest_PR #19497 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd
Collaborator

PR_Github #25723 [ run ] completed with state SUCCESS. Commit: 7e143ac
/LLM/main/L0_MergeRequest_PR pipeline #19505 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@yuantailing yuantailing merged commit 51ef037 into NVIDIA:main Nov 25, 2025
5 checks passed
@yuantailing yuantailing deleted the layer_wise_benchmarks branch November 27, 2025 05:19