Conversation

@LiliDeng LiliDeng commented Nov 3, 2025

No description provided.

@LiliDeng LiliDeng requested a review from squirrelsc as a code owner November 3, 2025 16:09
"""Helper class to convert between different units"""

# Unit conversion factors (to base unit)
CONVERSIONS = {
Member

Why is this part needed? Can we assume the units are always consistent?
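
For context, the truncated `CONVERSIONS` table above presumably maps unit strings to factors relative to a base unit. A minimal sketch of that idea follows; the units, base units, and factors here are illustrative assumptions, not the PR's actual values:

```python
# Hypothetical sketch only: factors to convert a value into an assumed base unit.
# The real table in the PR is truncated above; these entries are examples.
CONVERSIONS = {
    # throughput, base unit assumed to be MB/s
    "KB/s": 1 / 1024,
    "MB/s": 1.0,
    "GB/s": 1024.0,
    # latency, base unit assumed to be milliseconds
    "us": 1 / 1000,
    "ms": 1.0,
    "s": 1000.0,
}

def to_base_unit(value: float, unit: str) -> float:
    """Convert a metric value into its assumed base unit."""
    return value * CONVERSIONS[unit]
```

If the harness can guarantee that each metric always arrives in the same unit, as the question suggests, such a table could be dropped entirely.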

"generated_by": "manual",
"generated_at": "2025-10-31T00:00:00"
},
"perf_nvme": {
Member

@squirrelsc squirrelsc Nov 3, 2025

  1. The schema could be made more flexible and simpler, to avoid repeating the same information unnecessarily.
  2. Use YAML rather than JSON; it's easier to edit.
  3. We need to consider running each test case 3 times and then calculating the values.
# How many runs to aggregate; can be overridden at the group and metric levels.
statistics_times: 1
# Default is 10%; if a group has no tolerance of its own, this one is used.
error_threshold: 10% or 0.1
# Array of groups.
# name is just a human-readable label for logging.
- name: aaa
  # Default comes from the global value; if a metric has no tolerance of its own, this one is used.
  error_threshold:
  statistics_type: average
  statistics_times:
  conditions:
  - name: vm_size
    # Type can be metadata, information, or parameter.
    # metadata covers things like the test case name and test project name; information comes from the test result information; parameter comes from the recently defined perf metrics types.
    type: information
    # value can be a string or a regexp.
    value:
  metrics:
  # A group can apply to multiple metrics, so it's easy to reuse conditions and manage them.
  - name: metrics name
    # I think a value should be enough; the unit and which direction is better are already defined in the metrics.
    value:
    # Default is average; statistics_type can be average, median, max, min ...
    statistics_type: average
    # Default comes from the group definition; if a metric has no tolerance of its own, this one is used.
    error_threshold:
    statistics_times:
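
To make point 3 concrete, here is a rough sketch of how repeated runs could be aggregated and checked against a baseline with an error threshold. All names are illustrative; this is not the PR's actual implementation:

```python
import statistics

# Hypothetical helper: aggregate repeated measurements and compare to a baseline.
# "statistics_type" and "error_threshold" mirror the schema fields proposed above.
def evaluate_metric(
    samples: list[float],
    baseline: float,
    statistics_type: str = "average",
    error_threshold: float = 0.1,
    higher_is_better: bool = True,
) -> tuple[bool, float]:
    """Return (passed, aggregated_value) for one metric."""
    aggregators = {
        "average": statistics.mean,
        "median": statistics.median,
        "max": max,
        "min": min,
    }
    value = aggregators[statistics_type](samples)

    if higher_is_better:
        # Allow the value to drop at most error_threshold below the baseline.
        passed = value >= baseline * (1 - error_threshold)
    else:
        # For latency-like metrics, allow at most error_threshold above the baseline.
        passed = value <= baseline * (1 + error_threshold)
    return passed, value


# Example: three runs of an IOPS-like metric against a baseline of 100000.
passed, value = evaluate_metric([98000, 101500, 99800], baseline=100000)
```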

Collaborator Author

How can I pass statistics_times into the test case times field so that each test runs the expected number of times?

Member

Good question. I'm thinking we set it here and match `times` in the testcase section so everything works smoothly. We can always look into using a single value later if that makes sense.
With this setting, the notifier will only fail a test case once the configured number of runs is actually reached.
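
As a rough illustration of that idea (hypothetical names, not the actual notifier code), the notifier could buffer results per test case and only evaluate once it has collected `statistics_times` samples:

```python
from collections import defaultdict

# Hypothetical: buffer per-test-case samples until enough runs have arrived.
class SampleBuffer:
    def __init__(self, statistics_times: int) -> None:
        self._statistics_times = statistics_times
        self._samples: dict[str, list[float]] = defaultdict(list)

    def add(self, test_case_name: str, value: float) -> list[float] | None:
        """Store one run's value; return all samples once the expected count is reached."""
        samples = self._samples[test_case_name]
        samples.append(value)
        if len(samples) < self._statistics_times:
            return None  # not enough runs yet, so don't evaluate or fail anything
        return samples
```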

return f"No criteria defined (actual: {actual_value})"


class PerfEvaluation(notifier.Notifier):
Member

This notifier should follow the AI agent's approach of modifying a message. We can add one more field on test results to save all failed metrics, and also mark the test case as failed.
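
A rough sketch of that shape (the class and field names here are hypothetical and do not reflect LISA's actual message API):

```python
from dataclasses import dataclass, field

# Hypothetical result shape: the real change would extend the existing test result message.
@dataclass
class PerfTestResult:
    name: str
    status: str = "PASSED"
    failed_metrics: dict[str, str] = field(default_factory=dict)

def record_failure(result: PerfTestResult, metric: str, reason: str) -> None:
    """Attach a failed metric to the result and mark the whole test case as failed."""
    result.failed_metrics[metric] = reason
    result.status = "FAILED"
```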

) -> None:
"""Log performance failure information"""
test_key = perf_message.test_case_name
self._log.warning(f"Performance failure in {test_key}: {reason}")
Member

Don't print warn-level logs.
