[Test]: Models test configs in a single Config file #596
base: main
Conversation
Signed-off-by: Abukhoyer Shaik <abukhoye@qti.qualcomm.com>
```python
# assert (pytorch_kv_tokens == pytorch_hf_tokens).all(), (
#     "Tokens don't match for pytorch HF output and pytorch KV output"
# )
qeff_model = QEFFAutoModelForImageTextToText(model_hf, kv_offload=kv_offload)
```
We should still keep the check for pytorch KV output vs pytorch HF output. I think it's commented out because some models can't be run through HF directly; we should skip the check only for those models. Can you look into this and add it back?
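A minimal sketch of what "skip only for those models" could look like: the comparison stays enabled by default and is bypassed only for models on an explicit skip list. The model names and helper below are hypothetical, not part of the PR.

```python
# Hypothetical skip set for models whose HF reference cannot be run directly;
# the real names would come from the unified test config.
MODELS_WITHOUT_HF_REFERENCE = {"example/unsupported-model"}


def compare_or_skip(model_name, pytorch_kv_tokens, pytorch_hf_tokens):
    """Compare KV-cache tokens to the HF reference, skipping known-bad models."""
    if model_name in MODELS_WITHOUT_HF_REFERENCE:
        # In the real test this would call pytest.skip(...)
        return "skipped"
    assert pytorch_kv_tokens == pytorch_hf_tokens, (
        "Tokens don't match for pytorch HF output and pytorch KV output"
    )
    return "checked"
```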
```python
qeff_model = QEFFAutoModelForImageTextToText(model_hf, kv_offload=kv_offload)

qeff_model.export()
# onnx_model_path = qeff_model.export()
```
Keep the ONNX test as well; do not remove it, since we might re-enable it in the future.
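One way to keep a temporarily disabled test in the tree, as a sketch: mark it skipped rather than deleting or commenting it out, so re-enabling is a one-line change. The test name and reason string here are illustrative only.

```python
import pytest


# Sketch: keep the ONNX check in the suite but skip it for now.
@pytest.mark.skip(reason="ONNX path temporarily disabled; kept for future re-enable")
def test_onnx_export_tokens():
    # Placeholder body; never executed while the skip marker is present.
    onnx_model_path = qeff_model.export()  # noqa: F821
```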
```python
):
@pytest.mark.parametrize("model_name", test_intern_models)
@pytest.mark.parametrize("kv_offload", [True, False])
def test_image_text_to_text_intern_pytorch_vs_kv_vs_ort_vs_ai100(model_name, kv_offload):
```
Can we unify the Intern tests with the other VLMs?
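A sketch of one way that unification could look, assuming the model lists eventually come from the single config file: fold the Intern models into the shared list and drive one parametrized test instead of an Intern-only function. All model names below are placeholders.

```python
import pytest

# Hypothetical model lists; in this PR they would come from the single config file.
test_vlm_models = ["example/llava", "example/mllama"]
test_intern_models = ["example/intern-vl"]

# Unification sketch: one shared parametrize source covers Intern alongside
# the other VLMs.
all_image_text_models = test_vlm_models + test_intern_models


@pytest.mark.parametrize("model_name", all_image_text_models)
@pytest.mark.parametrize("kv_offload", [True, False])
def test_image_text_to_text_pytorch_vs_kv_vs_ort_vs_ai100(model_name, kv_offload):
    pass  # shared body would branch on any model-specific preprocessing
```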
Hi @quic-rishinr,
This pull request combines all of the model test configs into a single config JSON file.
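To illustrate the idea, here is a minimal sketch of how tests could read model lists from one unified JSON file. The key names and schema are hypothetical; the actual shape is whatever the PR's config file defines.

```python
import json

# Hypothetical shape of the unified config file (assumed, not from the PR).
EXAMPLE_CONFIG = """
{
    "image_text_to_text": {
        "models": ["example/llava", "example/intern-vl"],
        "kv_offload": [true, false]
    }
}
"""


def load_test_models(config_text, task):
    """Return the model list for a given task from the unified JSON config."""
    config = json.loads(config_text)
    return config[task]["models"]
```

In the test modules, the `@pytest.mark.parametrize` lists would then be built from this single source instead of per-file hard-coded lists.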