Commit 5dbd3ab

Returned VLM chat sample in CI. (#935)

1 parent 8143634 commit 5dbd3ab

File tree

3 files changed: +14 −3 lines changed

.github/workflows/causal_lm_cpp.yml

Lines changed: 8 additions & 1 deletion

@@ -701,7 +701,7 @@ jobs:
       run: |
         source ./ov/setupvars.sh
         cmake -DCMAKE_BUILD_TYPE=Release -S ./ -B ./build/
-        cmake --build ./build/ --config Release --target visual_language_chat -j
+        cmake --build ./build/ --config Release --target visual_language_chat py_generate_pipeline -j
     - name: Download and convert a model and an image
       run: |
         source ./ov/setupvars.sh
@@ -716,6 +716,13 @@ jobs:
         && timeout 120s ./build/samples/cpp/visual_language_chat/visual_language_chat ./miniCPM-V-2_6/ cat.jpg
         <<< $'What is on the image?\nWhat is special on the image?'
+    - name: Run python chat sample
+      run: |
+        source ./ov/setupvars.sh
+        export PYTHONPATH=./build/:$PYTHONPATH
+        printf 'What is on the image?\nWhat is special on the image?\n' > ./input.txt
+        timeout 120s python ./samples/python/visual_language_chat/visual_language_chat.py ./miniCPM-V-2_6/ cat.jpg < input.txt > ./pred.txt
+
   cpp-continuous-batching-ubuntu:
     runs-on: ubuntu-20.04-8-cores
     defaults:
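The new CI step drives the interactive sample non-interactively: the prompts are written to a file and redirected into stdin, and when they are exhausted, `input()` raises `EOFError`, which the sample's chat loop catches to exit cleanly. A minimal sketch of that pattern (the `read_prompts` helper is illustrative, not part of the sample):

```python
import io
import sys

def read_prompts():
    """Collect prompts from stdin until EOF, mirroring the sample's chat loop."""
    prompts = []
    while True:
        try:
            prompts.append(input('question:\n'))
        except EOFError:
            break
    return prompts

# Simulate `python visual_language_chat.py ... < input.txt` by swapping stdin
# for an in-memory buffer holding the same two questions the workflow uses.
sys.stdin = io.StringIO('What is on the image?\nWhat is special on the image?\n')
print(read_prompts())
```

Because `timeout 120s` wraps the invocation, a hung model still fails the job rather than stalling the runner.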

samples/python/vlm_chat_sample/README.md renamed to samples/python/visual_language_chat/README.md

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@ pip install --upgrade-strategy eager -r ../../requirements.txt
 ## Run:
 [This image](https://github.com/openvinotoolkit/openvino_notebooks/assets/29454499/d5fbbd1a-d484-415c-88cb-9986625b7b11) can be used as a sample image.

-`vlm_chat_sample.py ./miniCPM-V-2_6/ 319483352-d5fbbd1a-d484-415c-88cb-9986625b7b11.jpg`
+`visual_language_chat.py ./miniCPM-V-2_6/ 319483352-d5fbbd1a-d484-415c-88cb-9986625b7b11.jpg`


 Discrete GPUs (dGPUs) usually provide better performance compared to CPUs. It is recommended to run larger models on a dGPU with 32GB+ RAM. # TODO: examples of larger models

samples/python/vlm_chat_sample/vlm_chat_sample.py renamed to samples/python/visual_language_chat/visual_language_chat.py

Lines changed: 5 additions & 1 deletion

@@ -54,12 +54,16 @@ def main():
     config.max_new_tokens = 100

     pipe.start_chat()
+    prompt = input('question:\n')
+    pipe(prompt, image=image, generation_config=config, streamer=streamer)
+    print('\n----------')
+
     while True:
         try:
             prompt = input('question:\n')
         except EOFError:
             break
-        pipe(prompt, image=image, generation_config=config, streamer=streamer)
+        pipe(prompt, generation_config=config, streamer=streamer)
         print('\n----------')
     pipe.finish_chat()

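After this change, the image is passed to the pipeline only on the first turn; follow-up turns send text alone and rely on the chat history kept between `start_chat()` and `finish_chat()`. A minimal sketch of that calling pattern, using a stand-in pipeline (`FakePipe` and `chat` below are hypothetical stubs, not the openvino_genai API):

```python
class FakePipe:
    """Stand-in pipeline: records whether each call carried an image."""
    def __init__(self):
        self.calls = []

    def start_chat(self):
        self.calls.clear()

    def __call__(self, prompt, image=None, **kwargs):
        self.calls.append((prompt, image is not None))

    def finish_chat(self):
        pass

def chat(pipe, image, prompts):
    # First turn attaches the image; later turns are text-only,
    # since the pipeline's chat history already holds the visual context.
    pipe.start_chat()
    first, *rest = prompts
    pipe(first, image=image)
    for prompt in rest:
        pipe(prompt)
    pipe.finish_chat()
```

This mirrors why the diff moves the first `pipe(...)` call (with `image=image`) ahead of the loop and drops the `image` argument inside it.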