[None][feat] Add a parser to layer-wise benchmarks #9440
Conversation
Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
/bot run
📝 Walkthrough

The PR refactors the layer-wise benchmarking infrastructure by introducing new profiling and analysis capabilities. It replaces the single-run `run_single.py` entry point with a batched `run.py` orchestrator and adds a `parse.py` profile parser.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant run.sh
    participant run.py
    participant Runner
    participant KV Cache Mgr
    participant CUDA/Profiler
    participant parse.py
    participant template.html
    User->>run.sh: Execute benchmark
    run.sh->>run.py: Invoke with config
    run.py->>run.py: Parse YAML + CLI args
    run.py->>KV Cache Mgr: Initialize cache manager
    run.py->>Runner: Create runner instance
    run.py->>CUDA/Profiler: Start profiling (nsys)
    run.py->>Runner: Warmup loop with NVTX annotations
    run.py->>Runner: Main benchmark loop (multiple batch/seq configs)
    Runner->>CUDA/Profiler: CUDA events for timing
    run.py->>CUDA/Profiler: Stop profiling, generate .nsys-rep
    Note over run.sh: Output: nsys-rep + log file in PROFILE_DIR
    User->>parse.py: Parse profiles
    parse.py->>CUDA/Profiler: nsys export (if cache stale)
    parse.py->>parse.py: Extract kernel events from SQLite
    parse.py->>parse.py: Map kernels, compute timing, normalize
    parse.py->>parse.py: Generate CSV + JSON hierarchical structure
    Note over parse.py: Output: CSV + JSON in PROFILE_DIR
    User->>template.html: Open dashboard
    template.html->>template.html: Load headerConfig + rawData (JSON)
    template.html->>template.html: Render hierarchical table + charts
    User->>template.html: Click cells / rows for interaction
    template.html->>template.html: Update bar chart + sunburst chart
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Areas requiring extra attention:
Possibly related PRs
Suggested reviewers
Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Tip

📝 Customizable high-level summaries are now available in beta! You can now customize how CodeRabbit generates the high-level summary in your pull requests, including its content, structure, tone, and formatting.
Example instruction:
Note: This feature is currently in beta for Pro-tier users, and pricing will be announced later. Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 5
🧹 Nitpick comments (7)
tensorrt_llm/tools/layer_wise_benchmarks/mark_utils.py (1)
13-24: Consider adding an idempotency check (optional refinement).

The `mark_ranges` function globally replaces forward methods with nvtx-decorated versions. While appropriate for a profiling tool, calling it multiple times would create nested decorators. Consider adding a guard to make it idempotent if there's any risk of multiple invocations. For example:

```diff
 def mark_ranges():
+    # Check if already marked to avoid double-wrapping
+    if hasattr(DeepseekV3Gate.forward, '_nvtx_marked'):
+        return
     DeepseekV3Gate.forward = nvtx.annotate("DeepseekV3Gate")(DeepseekV3Gate.forward)
+    DeepseekV3Gate.forward._nvtx_marked = True
     # ... repeat for other classes
```

However, this is only necessary if `mark_ranges` might be called multiple times in the same process.

examples/layer_wise_benchmarks/parse.py (2)
427-431: Enable autoescape or use safe JSON rendering.

The Jinja2 environment is created without `autoescape`, which could lead to XSS if any data contains HTML special characters. Since the template embeds data directly into JavaScript, consider using the `tojson` filter or `json.dumps` with proper escaping.

```diff
-js_header_config = [{"name": problem["text"]} for problem in problem_set]
-loader = jinja2.FileSystemLoader(Path(__file__).parent)
-template = jinja2.Environment(loader=loader).get_template("template.html")
-with html_file_path.open("w") as f:
-    f.write(template.render(headerConfig=js_header_config, rawData=js_data))
+js_header_config = [{"name": problem["text"]} for problem in problem_set]
+loader = jinja2.FileSystemLoader(Path(__file__).parent)
+env = jinja2.Environment(loader=loader, autoescape=jinja2.select_autoescape())
+template = env.get_template("template.html")
+with html_file_path.open("w") as f:
+    f.write(
+        template.render(
+            headerConfig=jinja2.Markup(json.dumps(js_header_config)),
+            rawData=jinja2.Markup(json.dumps(js_data)),
+        )
+    )
```

Alternatively, update `template.html` to use `{{ headerConfig | tojson }}` and `{{ rawData | tojson }}`.
231-235: Add descriptive assertion message for debugging.

If kernel sequences differ between runs, the bare assertion provides no context for debugging.

```diff
 for problem_id in range(len(kernels)):
     required_seq = [demangledName for demangledName, _, _, _ in kernels[problem_id][0]]
     for run_id in range(len(kernels[problem_id])):
         seq = [demangledName for demangledName, _, _, _ in kernels[problem_id][run_id]]
-        assert seq == required_seq
+        assert seq == required_seq, (
+            f"Kernel sequence mismatch in problem {problem_id}, run {run_id}: "
+            f"expected {len(required_seq)} kernels, got {len(seq)}"
+        )
```

examples/layer_wise_benchmarks/run.py (1)
169-172: Avoid list comprehension for side effects.

Using a list comprehension purely for side effects is not idiomatic and allocates an unnecessary list.

```diff
 events = [
     torch.cuda.Event(enable_timing=True)
     for _ in range(args.warmup_times + args.run_times + 1)
 ]
-[e.record() for e in events]  # Explicitly warmup events because torch is lazy
+for e in events:  # Explicitly warmup events because torch is lazy
+    e.record()
```

examples/layer_wise_benchmarks/template.html (1)
693-726: Remove debug console.log statement.

The `console.log(maxDepth)` at line 706 appears to be a debug statement that should be removed.

```diff
 for (let node of rawData) {
     maxDepth = Math.max(maxDepth, getDepth(node));
 }
-console.log(maxDepth);

 const container = document.getElementById('level-buttons');
```

tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py (2)
40-59: Fix typo: "reciever_rank" → "receiver_rank".

Minor spelling error in the loop variable name.

```diff
     # Second, each receiver selects target expert
     target_expert = torch.empty_like(target_rank)
-    for reciever_rank in range(world_size):
-        mask = target_rank == reciever_rank
+    for receiver_rank in range(world_size):
+        mask = target_rank == receiver_rank
         experts_per_rank = num_experts // world_size
         local_expert = torch.arange(num_tokens * top_k) % experts_per_rank
-        target_expert[mask] = (reciever_rank * experts_per_rank) + local_expert
+        target_expert[mask] = (receiver_rank * experts_per_rank) + local_expert
```
231-244: Prefix unused `cls` parameter with underscore.

The `cls` parameter is unused but required for interface compatibility. Prefix with underscore to indicate intentional non-use.

```diff
 def make_select_alltoall_method_type(select_alltoall_method_type_orig):
     def select_alltoall_method_type(
-        cls: type, mapping: Mapping, top_k: int, *args, **kwargs
+        _cls: type, mapping: Mapping, top_k: int, *args, **kwargs
     ):
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- .gitignore (1 hunks)
- examples/layer_wise_benchmarks/README.md (3 hunks)
- examples/layer_wise_benchmarks/parse.py (1 hunks)
- examples/layer_wise_benchmarks/run.py (1 hunks)
- examples/layer_wise_benchmarks/run.sh (2 hunks)
- examples/layer_wise_benchmarks/run_single.py (0 hunks)
- examples/layer_wise_benchmarks/slurm_init_containers.sh (2 hunks)
- examples/layer_wise_benchmarks/slurm_query_container_name.sh (1 hunks)
- examples/layer_wise_benchmarks/template.html (1 hunks)
- tensorrt_llm/tools/layer_wise_benchmarks/__init__.py (1 hunks)
- tensorrt_llm/tools/layer_wise_benchmarks/deepseekv3_runner.py (1 hunks)
- tensorrt_llm/tools/layer_wise_benchmarks/mark_utils.py (1 hunks)
- tensorrt_llm/tools/layer_wise_benchmarks/runner_interface.py (1 hunks)
- tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py (5 hunks)
- tests/unittest/tools/test_layer_wise_benchmarks.py (8 hunks)
💤 Files with no reviewable changes (1)
- examples/layer_wise_benchmarks/run_single.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use `from package.subpackage import foo` and then `foo.SomeClass()` instead of `from package.subpackage.foo import SomeClass`)
Python filenames should use snake_case (e.g., `some_file.py`)
Python class names should use PascalCase (e.g., `class SomeClass`)
Python function and method names should use snake_case (e.g., `def my_awesome_function():`)
Python local variable names should use snake_case, with prefix `k` for variable names that start with a number (e.g., `k_99th_percentile = ...`)
Python global variables should use upper snake_case with prefix `G` (e.g., `G_MY_GLOBAL = ...`)
Python constants should use upper snake_case (e.g., `MY_CONSTANT = ...`)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., `self.x = 5` followed by `"""<type>: Description of 'x'"""`)
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic
Files:
tensorrt_llm/tools/layer_wise_benchmarks/__init__.py
examples/layer_wise_benchmarks/parse.py
examples/layer_wise_benchmarks/run.py
tests/unittest/tools/test_layer_wise_benchmarks.py
tensorrt_llm/tools/layer_wise_benchmarks/mark_utils.py
tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py
tensorrt_llm/tools/layer_wise_benchmarks/runner_interface.py
tensorrt_llm/tools/layer_wise_benchmarks/deepseekv3_runner.py
**/*.{cpp,h,cu,py}
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top
Files:
tensorrt_llm/tools/layer_wise_benchmarks/__init__.py
examples/layer_wise_benchmarks/parse.py
examples/layer_wise_benchmarks/run.py
tests/unittest/tools/test_layer_wise_benchmarks.py
tensorrt_llm/tools/layer_wise_benchmarks/mark_utils.py
tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py
tensorrt_llm/tools/layer_wise_benchmarks/runner_interface.py
tensorrt_llm/tools/layer_wise_benchmarks/deepseekv3_runner.py
🧠 Learnings (7)
📚 Learning: 2025-08-06T13:58:07.506Z
Learnt from: galagam
Repo: NVIDIA/TensorRT-LLM PR: 6487
File: tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:1-12
Timestamp: 2025-08-06T13:58:07.506Z
Learning: In TensorRT-LLM, test files (files under tests/ directories) do not require NVIDIA copyright headers, unlike production source code files. Test files typically start directly with imports, docstrings, or code.
Applied to files:
.gitignore
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.
Applied to files:
.gitignore
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
examples/layer_wise_benchmarks/run.py
tests/unittest/tools/test_layer_wise_benchmarks.py
📚 Learning: 2025-08-21T21:48:35.135Z
Learnt from: djns99
Repo: NVIDIA/TensorRT-LLM PR: 7104
File: cpp/tensorrt_llm/cutlass_extensions/include/cutlass_extensions/epilogue/fusion/sm90_visitor_scatter.hpp:399-417
Timestamp: 2025-08-21T21:48:35.135Z
Learning: CUTLASS extensions in TensorRT-LLM (located under cpp/tensorrt_llm/cutlass_extensions/) are designed to integrate with and extend functionality in the external CUTLASS repository. When analyzing these extensions, their consumers and functionality wiring may exist in the CUTLASS codebase rather than within TensorRT-LLM itself.
Applied to files:
tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py
📚 Learning: 2025-08-14T06:36:40.701Z
Learnt from: timlee0212
Repo: NVIDIA/TensorRT-LLM PR: 6886
File: tensorrt_llm/_torch/models/modeling_deepseekv3.py:0-0
Timestamp: 2025-08-14T06:36:40.701Z
Learning: In DeepSeek V3 model (tensorrt_llm/_torch/models/modeling_deepseekv3.py), the disagreement between AllReduce.__init__ guard and _compute_mlp_tp_size logic for MNNVL usage is expected by design. The AllReduce component and MLP TP-size computation intentionally use different criteria for MNNVL availability decisions.
Applied to files:
tensorrt_llm/tools/layer_wise_benchmarks/deepseekv3_runner.py
📚 Learning: 2025-09-19T21:28:13.751Z
Learnt from: jhaotingc
Repo: NVIDIA/TensorRT-LLM PR: 7856
File: cpp/tensorrt_llm/thop/fp8BlockScaleMoe.cpp:159-166
Timestamp: 2025-09-19T21:28:13.751Z
Learning: In TensorRT-LLM blockScaleMoe routing (cpp/tensorrt_llm/kernels/trtllmGenKernels/blockScaleMoe/runner.cu), the DeepSeek routing method performs reinterpret_cast<float*>(routingLogits) at line 89, which could cause issues if routing_logits are BF16. However, Qwen3-FP8 models use RenormalizeNaive routing method and are not affected by this dtype casting issue.
Applied to files:
tensorrt_llm/tools/layer_wise_benchmarks/deepseekv3_runner.py
📚 Learning: 2025-10-20T17:07:18.745Z
Learnt from: nvchenghaoz
Repo: NVIDIA/TensorRT-LLM PR: 8469
File: tensorrt_llm/_torch/auto_deploy/models/patches/nemotron_h.py:98-116
Timestamp: 2025-10-20T17:07:18.745Z
Learning: In NemotronH models (tensorrt_llm/_torch/auto_deploy/models/patches/nemotron_h.py), the gate (self.gate) returns topk_indices and topk_weights that are already in the correct shape to be passed directly to torch_ops.auto_deploy.torch_moe without needing to reshape them when hidden_states is flattened.
Applied to files:
tensorrt_llm/tools/layer_wise_benchmarks/deepseekv3_runner.py
🧬 Code graph analysis (4)
tensorrt_llm/tools/layer_wise_benchmarks/__init__.py (1)
- tensorrt_llm/tools/layer_wise_benchmarks/mark_utils.py (1)
  - mark_ranges (13-24)

examples/layer_wise_benchmarks/run.py (5)
- tensorrt_llm/_torch/modules/multi_stream_utils.py (1)
  - with_multi_stream (26-32)
- tensorrt_llm/_utils.py (2)
  - local_mpi_rank (560-561)
  - mpi_world_size (556-557)
- tensorrt_llm/tools/layer_wise_benchmarks/runner_interface.py (3)
  - BalanceMethod (10-14)
  - create_mapping (48-49)
  - create_kv_cache_manager (36-44)
- tensorrt_llm/tools/layer_wise_benchmarks/runner_factory.py (1)
  - get_runner_cls (7-13)
- tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py (2)
  - create_mapping (522-539)
  - create_kv_cache_manager (430-519)

tests/unittest/tools/test_layer_wise_benchmarks.py (1)
- tests/integration/defs/trt_test_alternative.py (1)
  - check_call (250-258)

tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py (1)
- tensorrt_llm/tools/layer_wise_benchmarks/runner_interface.py (2)
  - BalanceMethod (10-14)
  - replace_routing_method_ctx (31-32)
🪛 markdownlint-cli2 (0.18.1)
examples/layer_wise_benchmarks/README.md
124-124: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🪛 Ruff (0.14.5)
examples/layer_wise_benchmarks/parse.py
35-35: subprocess call: check for execution of untrusted input
(S603)
36-45: Starting a process with a partial executable path
(S607)
161-168: Possible SQL injection vector through string-based query construction
(S608)
211-211: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
224-224: Loop control variable runtime_start not used within loop body
(B007)
225-225: Loop control variable runtime_end not used within loop body
(B007)
226-226: Loop control variable capture_start not used within loop body
(B007)
227-227: Loop control variable capture_end not used within loop body
(B007)
391-391: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
429-429: By default, jinja2 sets autoescape to False. Consider using autoescape=True or the select_autoescape function to mitigate XSS vulnerabilities.
(S701)
examples/layer_wise_benchmarks/run.py
75-75: Avoid specifying long messages outside the exception class
(TRY003)
85-85: Avoid specifying long messages outside the exception class
(TRY003)
220-220: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
220-220: Prefer itertools.pairwise() over zip() when iterating over successive pairs
Replace zip() with itertools.pairwise()
(RUF007)
tests/unittest/tools/test_layer_wise_benchmarks.py
15-15: subprocess call: check for execution of untrusted input
(S603)
38-38: subprocess call: check for execution of untrusted input
(S603)
65-65: subprocess call: check for execution of untrusted input
(S603)
90-90: subprocess call: check for execution of untrusted input
(S603)
108-108: subprocess call: check for execution of untrusted input
(S603)
109-109: Starting a process with a partial executable path
(S607)
120-120: subprocess call: check for execution of untrusted input
(S603)
140-140: subprocess call: check for execution of untrusted input
(S603)
141-141: Starting a process with a partial executable path
(S607)
tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py
80-80: Avoid specifying long messages outside the exception class
(TRY003)
88-88: Avoid specifying long messages outside the exception class
(TRY003)
93-93: Avoid specifying long messages outside the exception class
(TRY003)
123-125: Avoid specifying long messages outside the exception class
(TRY003)
233-233: Unused function argument: cls
(ARG001)
415-415: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
426-426: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
🪛 Shellcheck (0.11.0)
examples/layer_wise_benchmarks/slurm_query_container_name.sh
[error] 16-16: Since you double quoted this, it will not word split, and the loop will only run once.
(SC2066)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (21)
.gitignore (1)
76-76: Profile outputs correctly excluded from version control.

The addition of `/examples/layer_wise_benchmarks/profiles/` to the "Generated files" section is appropriate and aligns with the PR's introduction of new profiling infrastructure. Generated profiling data should not be committed to the repository.

examples/layer_wise_benchmarks/slurm_init_containers.sh (1)
6-6: LGTM! Good practice making the container name configurable.

The introduction of a configurable `CONTAINER_NAME` environment variable with a sensible default improves flexibility without breaking existing usage.

Also applies to: 50-50
tensorrt_llm/tools/layer_wise_benchmarks/__init__.py (1)
1-5: LGTM! Correctly exports the new profiling utility.

The `mark_ranges` function is properly imported and exposed in the public API, following Python packaging best practices.

tests/unittest/tools/test_layer_wise_benchmarks.py (2)
14-29: LGTM! Test correctly updated to use the new run.sh interface.

The test now uses a configurable `profile_dir` and the updated `run.sh` entrypoint, properly passing the `PROFILE_DIR` environment variable.
108-111: Review comment is incorrect; parse.py invocation pattern is intentional.

The parse.py step is deliberately invoked only in tests using `config_gen.yaml` (generation phase profiling) and not in tests using `config_ctx.yaml` (context phase profiling). This reflects a design choice where generation profiling requires post-processing via parse.py, while context profiling does not. The pattern is consistent and correct.

Likely an incorrect or invalid review comment.
examples/layer_wise_benchmarks/run.sh (1)
16-17: LGTM! Improved configurability and logging.

The script now uses a configurable `PROFILE_DIR` with a sensible default and properly logs output to both console and file. The transition from `run_single.py` to `run.py` aligns with the PR's orchestration improvements.

Also applies to: 27-27, 38-40
examples/layer_wise_benchmarks/README.md (1)
1-146: LGTM! Comprehensive documentation of the new workflow.

The README thoroughly documents the transition from `run_single.sh` to `run.sh`, the new batched execution capabilities, profile parsing with `parse.py`, and updated Slurm integration. The examples clearly demonstrate the various configuration options.

tensorrt_llm/tools/layer_wise_benchmarks/runner_interface.py (1)
31-32: All implementations of the renamed abstract method have been properly updated.

Verification confirms:

- Abstract method (runner_interface.py:31): `replace_routing_method_ctx` defined with correct signature
- All RunnerBase implementations properly updated:
  - DeepSeekV3Runner inherits implementation from RunnerMixin
  - Qwen3NextRunner inherits implementation from RunnerMixin
- Implementation (runner_utils.py:391): RunnerMixin provides the method body with matching signature
- No orphaned references: Zero occurrences of old method name `replace_routing_method` anywhere in codebase
- Usage sites updated: Examples in run.py (lines 157, 197) use the new method name
tensorrt_llm/tools/layer_wise_benchmarks/deepseekv3_runner.py (1)
16-96: LGTM!

The simplified `DeepSeekV3Runner` class implementation is clean. The removal of complex routing/balance logic in favor of the context-manager approach (`replace_routing_method_ctx` from `RunnerMixin`) improves maintainability.

examples/layer_wise_benchmarks/parse.py (3)
30-46: LGTM!

The `lazy_convert_sqlite` function correctly implements caching behavior based on file modification times. The `subprocess.check_call` usage with an explicit argument list (not shell=True) is the secure approach.
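The pattern described there can be sketched as follows. This is a minimal illustration rather than parse.py's actual code; the function signature and the exact `nsys export` arguments are assumptions:

```python
import subprocess
from pathlib import Path

def lazy_convert_sqlite(rep_path: Path, sqlite_path: Path) -> None:
    """Export the .nsys-rep to SQLite only when the cached export is stale."""
    if sqlite_path.exists() and sqlite_path.stat().st_mtime >= rep_path.stat().st_mtime:
        return  # cached export is newer than the report; skip the expensive step
    # Explicit argument list (no shell=True) keeps the call injection-safe.
    subprocess.check_call(
        ["nsys", "export", "--type", "sqlite", "--force-overwrite", "true",
         "--output", str(sqlite_path), str(rep_path)]
    )
```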
151-168: LGTM!

The dynamic SQL query construction is safe here. The table names (`CUPTI_ACTIVITY_KIND_MEMCPY`, `CUPTI_ACTIVITY_KIND_MEMSET`) are hardcoded constants, not user input. The conditional addition based on schema introspection is a valid pattern for handling optional tables in Nsight Systems exports.
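As a minimal sketch of that introspection pattern (the column list is abbreviated; the actual query in parse.py is more involved):

```python
import sqlite3

def build_activity_query(conn: sqlite3.Connection) -> str:
    """UNION in the optional CUPTI tables only when the export contains them."""
    existing = {
        name for (name,) in
        conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
    }
    query = "SELECT start, end, correlationId FROM CUPTI_ACTIVITY_KIND_KERNEL"
    # Table names are hardcoded constants, so concatenation is not an injection risk.
    for optional in ("CUPTI_ACTIVITY_KIND_MEMCPY", "CUPTI_ACTIVITY_KIND_MEMSET"):
        if optional in existing:
            query += f" UNION ALL SELECT start, end, correlationId FROM {optional}"
    return query
```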
346-364: LGTM!

The overlap and space time calculations correctly measure parallel kernel execution. The negative overlap value (line 359) appropriately represents time saved through parallelism, while space represents idle time between kernels.
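The accounting being described can be illustrated in isolation. This sketch assumes one plausible convention (intervals sorted by start time) and is not the exact code from parse.py:

```python
def overlap_and_space(intervals):
    """Return (overlap, space) for (start, end) kernel intervals sorted by start.

    overlap is negative when kernels run concurrently (time saved by
    parallelism); space accumulates idle gaps between consecutive kernels.
    """
    overlap, space = 0, 0
    prev_end = intervals[0][1]
    for start, end in intervals[1:]:
        if start < prev_end:
            overlap -= prev_end - start  # concurrent execution
        else:
            space += start - prev_end    # idle gap
        prev_end = max(prev_end, end)
    return overlap, space

# Two serial kernels with a 5-unit gap: overlap_and_space([(0, 10), (15, 20)]) == (0, 5)
# Two overlapping kernels:              overlap_and_space([(0, 10), (5, 20)]) == (-5, 0)
```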
examples/layer_wise_benchmarks/run.py (3)
175-188: LGTM!

The NVTX annotation pattern with `pass` is intentional per the comment, to avoid clutter in the Nsight Systems UI. The problem spec annotations provide useful context for parsing in `parse.py`.
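For reference, the empty-range idiom being described looks roughly like this; the label string is a hypothetical example of a problem spec, not the exact format run.py emits:

```python
import nvtx

# An empty annotated range adds a marker the parser can read later,
# without contributing any GPU work of its own.
with nvtx.annotate("problem: batch_size=32 seq_len=1024"):
    pass
```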
218-234: LGTM!

The timing statistics calculation is correct. The `zip(events, events[1:])` pattern is appropriate for Python 3.8+ compatibility (itertools.pairwise requires 3.10+).
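A condensed sketch of that timing pattern; `do_step` is a hypothetical workload, not a function from run.py:

```python
import torch

def time_steps(do_step, n_steps: int) -> list:
    """Per-step GPU milliseconds via CUDA events and a pairwise zip."""
    events = [torch.cuda.Event(enable_timing=True) for _ in range(n_steps + 1)]
    events[0].record()
    for i in range(n_steps):
        do_step()
        events[i + 1].record()
    torch.cuda.synchronize()  # elapsed_time requires completed events
    # zip(events, events[1:]) pairs each event with its successor; this is
    # the Python 3.8-compatible spelling of itertools.pairwise (3.10+).
    return [a.elapsed_time(b) for a, b in zip(events, events[1:])]
```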
130-167: LGTM!

The warmup logic correctly handles autotuning on the first configuration (max batch/seq_len) followed by warmup iterations for all problem configurations. The assertions at lines 147-148 provide good early validation.
examples/layer_wise_benchmarks/template.html (3)
1-11: LGTM - Good security practice with SRI.

The echarts CDN script includes a Subresource Integrity (SRI) hash and crossorigin attribute, which is a good security practice to ensure the script hasn't been tampered with.
281-310: LGTM!

The `processData` function correctly aggregates timing data through the node hierarchy with a clean recursive approach. The `totalNode` creation provides a proper data structure for the Total row.
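The roll-up processData performs is the usual recursive aggregation; expressed in Python for brevity, with illustrative field names rather than the template's actual schema:

```python
def aggregate(node: dict) -> float:
    """Sum leaf times bottom-up so every parent holds its subtree total."""
    children = node.get("children", [])
    if children:
        node["time"] = sum(aggregate(child) for child in children)
    return node["time"]

# A synthetic Total row can be built the same way:
# total_node = {"name": "Total", "children": raw_data, "time": 0.0}
# aggregate(total_node)
```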
607-679: LGTM - Good accessibility support.

The keyboard navigation implementation is comprehensive, supporting arrow keys for navigation and +/- for expand/collapse. The handling of edge cases (total row, boundaries) and smooth scrolling improves user experience.
tensorrt_llm/tools/layer_wise_benchmarks/runner_utils.py (3)
62-63: LGTM - Caching is appropriate for benchmarking.

The `functools.cache` usage is appropriate here since the balanced selection functions are deterministic and called repeatedly with the same parameters during benchmarking runs.
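A minimal sketch of that memoization pattern; the function body and signature are illustrative stand-ins, not the real selection logic:

```python
import functools

@functools.cache
def balanced_assignment(num_tokens: int, num_experts: int) -> tuple:
    """Deterministic in its arguments, so repeated benchmark iterations
    with the same problem shape reuse the first computed result."""
    # Round-robin tokens over experts; a tuple keeps the result hashable.
    return tuple(i % num_experts for i in range(num_tokens))

# Second call with the same shape is a cache hit:
# balanced_assignment(8, 4) -> (0, 1, 2, 3, 0, 1, 2, 3)
```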
390-427: LGTM - Well-structured context manager.

The `replace_routing_method_ctx` correctly implements the save/restore pattern with proper cleanup in the `finally` block. The validation at lines 392-412 provides clear error messages for unsupported configurations.
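The save/restore shape being praised looks generically like this; class and attribute names are placeholders rather than the actual ones in runner_utils.py:

```python
import contextlib

@contextlib.contextmanager
def replace_method_ctx(cls, name: str, replacement):
    """Temporarily swap a method on a class, restoring it even on error."""
    original = getattr(cls, name)
    setattr(cls, name, replacement)
    try:
        yield
    finally:
        setattr(cls, name, original)  # guaranteed restore, as in the finally block

# Usage (hypothetical names):
# with replace_method_ctx(SomeGate, "forward", patched_forward):
#     run_benchmark()
```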
65-93: LGTM - Comprehensive test coverage.

The `test_get_balanced_selection` function provides thorough validation of the balanced selection algorithm, checking for duplicate experts, per-rank balance, and global expert balance across various parameter combinations.
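The properties that test checks can be sketched as standalone assertions; the names, shapes, and tolerances here are hypothetical, not the actual test code:

```python
import collections

def check_balanced(selection, top_k: int, num_experts: int, world_size: int):
    """Property checks mirroring the ones described: no duplicate experts
    per token, global expert balance, and per-rank balance."""
    experts_per_rank = num_experts // world_size
    counts = collections.Counter()
    for token_experts in selection:  # one top-k tuple per token
        assert len(set(token_experts)) == top_k  # no duplicates within a token
        counts.update(token_experts)
    loads = [counts[e] for e in range(num_experts)]
    assert max(loads) - min(loads) <= 1  # global expert balance
    rank_loads = [
        sum(loads[r * experts_per_rank:(r + 1) * experts_per_rank])
        for r in range(world_size)
    ]
    assert max(rank_loads) - min(rank_loads) <= top_k  # per-rank balance
```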
PR_Github #25707 [ run ] triggered by Bot. Commit:
Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
/bot run

PR_Github #25715 [ run ] triggered by Bot. Commit:

PR_Github #25707 [ run ] completed with state
Signed-off-by: Tailing Yuan <yuantailing@gmail.com>
/bot run

PR_Github #25723 [ run ] triggered by Bot. Commit:

PR_Github #25715 [ run ] completed with state

PR_Github #25723 [ run ] completed with state
Summary by CodeRabbit
New Features
Documentation
Chores
✏️ Tip: You can customize this high-level summary in your review settings.
Description
Test Coverage
PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user-friendly way for developers to interact with a Jenkins server.

Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
`run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]`

Launch build/test pipelines. All previously running jobs will be killed.

- `--reuse-test (optional)pipeline-id` (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL): Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL): Skip all test stages, but still run build stages, package stages, and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL): Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL): Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL): Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill

`kill`

Kill all running builds associated with pull request.

skip

`skip --comment COMMENT`

Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

`reuse-pipeline`

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.