Conversation

@abukhoy (Contributor) commented Oct 22, 2025

This pull request combines all the testing model configs into a single config JSON file.

Signed-off-by: Abukhoyer Shaik <abukhoye@qti.qualcomm.com>
@abukhoy abukhoy marked this pull request as ready for review October 22, 2025 11:24
# assert (pytorch_kv_tokens == pytorch_hf_tokens).all(), (
# "Tokens don't match for pytorch HF output and pytorch KV output"
# )
qeff_model = QEFFAutoModelForImageTextToText(model_hf, kv_offload=kv_offload)
Contributor
We should still maintain the check for pytorch KV vs pytorch HF. I think it is commented out because some models can't run HF directly; we should skip the check only for those models. Can you look into this and add it back?
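One way to add the check back while skipping only the problem models (a sketch; `MODELS_WITHOUT_HF_RUN` and its contents are hypothetical placeholders, not names from this PR):

```python
import numpy as np

# Hypothetical skip list: models whose HF reference run is not supported.
MODELS_WITHOUT_HF_RUN = {"example/unsupported-model"}

def check_kv_vs_hf_tokens(model_name, pytorch_kv_tokens, pytorch_hf_tokens):
    # Skip the comparison only for models that cannot run HF directly.
    if model_name in MODELS_WITHOUT_HF_RUN:
        return
    assert (np.asarray(pytorch_kv_tokens) == np.asarray(pytorch_hf_tokens)).all(), (
        "Tokens don't match for pytorch HF output and pytorch KV output"
    )
```

This keeps the assertion active for every model by default and makes the exceptions explicit and auditable in one place.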

qeff_model = QEFFAutoModelForImageTextToText(model_hf, kv_offload=kv_offload)

qeff_model.export()
# onnx_model_path = qeff_model.export()
Contributor

Keep the ONNX test as well; do not remove it, as we might enable it back in the future.
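Instead of deleting the ONNX check, it could be kept behind a skip marker so re-enabling it later is a one-line change (a sketch; the helper and test names are illustrative, not from this PR):

```python
import pytest

# Hypothetical stand-in for qeff_model.export(); illustrative only.
def export_onnx_model():
    return "model.onnx"

# Keep the test in the suite but skipped, so it can be enabled again later.
@pytest.mark.skip(reason="ONNX comparison temporarily disabled; keep for future re-enable")
def test_onnx_export_path():
    onnx_model_path = export_onnx_model()
    assert onnx_model_path.endswith(".onnx")
```

A skipped test still shows up in the pytest report, so the disabled coverage stays visible rather than silently disappearing from the codebase.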

):
@pytest.mark.parametrize("model_name", test_intern_models)
@pytest.mark.parametrize("kv_offload", [True, False])
def test_image_text_to_text_intern_pytorch_vs_kv_vs_ort_vs_ai100(model_name, kv_offload):
Contributor

Can we unify the Intern tests with the other VLMs?
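One shape the unification could take: fold the Intern list into the shared parametrization so a single test covers all VLMs (a sketch; the model lists here are placeholders, and in this PR they would presumably be loaded from the combined config JSON):

```python
import pytest

# Hypothetical placeholder lists; the real ones would come from the unified config JSON.
test_models_image_text = ["example/llava-model", "example/mllama-model"]
test_intern_models = ["example/intern-model"]

@pytest.mark.parametrize("model_name", test_models_image_text + test_intern_models)
@pytest.mark.parametrize("kv_offload", [True, False])
def test_image_text_to_text_pytorch_vs_kv_vs_ort_vs_ai100(model_name, kv_offload):
    # A shared body would dispatch on model type internally where behavior differs.
    return model_name, kv_offload
```

Any Intern-specific handling (e.g. preprocessing) would then live behind a branch or helper inside the shared body instead of a duplicated test function.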

@abukhoy (Contributor, Author) commented Nov 6, 2025

Hi @quic-rishinr
I will address your comments as soon as possible. Thanks!
