Samples for VLM video input. #3050
Open
popovaan wants to merge 43 commits into openvinotoolkit:master from popovaan:video_to_text_sample
+334 −14
Changes from 21 commits
Commits (43, all by popovaan):
ca857ff  Video to text python sample.
3b3c69d  Sample test.
b4a84f7  Update samples/python/visual_language_chat/video_to_text_chat.py
29d78c4  Added c++ sample.
1a8944a  Attempt to add opencv build to ga workflow.
1a0d25c  Revert "Attempt to add opencv build to ga workflow."
4d070ab  Used FetchContent to add opencv.
a8fa911  Corrected test.
bdc6940  Convert path to string().
735060e  Updated readme.
5dfdcf7  Set 8 frames.
43c76c9  Update samples/cpp/visual_language_chat/README.md
d77276f  Fixed opencv version, minor corrections.
cabd763  Added assert.
a1c1290  Merge branch 'master' into video_to_text_sample
46e7d5d  Increase samples build timeout.
9f4e6b9  Merge branch 'video_to_text_sample' of https://github.com/popovaan/op…
5b6044d  Cmake corrected.
54cebe6  Attempt to fix ci.
e8cb51e  Fix on win.
58b8be4  Merge branch 'master' into video_to_text_sample
6dac23e  Apply suggestions from code review
faffe59  Attempt too fix error.
6e749ff  Attempt to fix cmake.
f71bb59  Attempt to fix.
cc49e6c  Merge master.
bbf700c  Change video.
40ac708  Set WITH_FFMPEG.
ace9a6a  Temporarily remove launching of cpp sample.
f8a3d0d  Returned cpp sample launch.
87c58f6  Add install ffmpeg.
5440d42  Minor correction.
96d4fa2  Added libs install needed by ffmpeg.
21127ed  Minor correction.
95ce2a3  Add debug info.
8f4457e  Attempt to fix.
b7c8dd2  Applied comments, removed debug print.
e11adbb  Attempt to fix.
b274aea  Increase timeout.
d423f8e  Removed not needed code.
931a072  Increase timeout.
e9fd0ce  Update samples/cpp/visual_language_chat/video_to_text_chat.cpp
2883d5a  Increase timeout for building win samples.
samples/cpp/visual_language_chat/video_to_text_chat.cpp (126 additions, 0 deletions)
// Copyright (C) 2024 Intel Corporation
// SPDX-License-Identifier: Apache-2.0

#include <openvino/genai/visual_language/pipeline.hpp>
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <algorithm>
#include <cstring>
#include <filesystem>
#include <iostream>
#include <set>
#include <vector>

namespace fs = std::filesystem;

// Pick num_frames frame indices spread uniformly across the whole clip.
std::vector<size_t> make_indices(size_t total_frames, size_t num_frames) {
    std::vector<size_t> indices;
    indices.reserve(num_frames);

    auto step = float(total_frames) / num_frames;

    for (size_t i = 0; i < num_frames; ++i) {
        size_t idx = std::min(size_t(i * step), total_frames - 1);
        indices.push_back(idx);
    }

    return indices;
}

ov::Tensor load_video(const std::filesystem::path& video_path, size_t num_frames = 8) {
    cv::VideoCapture cap(video_path.string());

    if (!cap.isOpened()) {
        OPENVINO_THROW("Could not open the video file.");
    }
    size_t total_num_frames = cap.get(cv::CAP_PROP_FRAME_COUNT);
    auto indices = make_indices(total_num_frames, num_frames);

    cv::Mat frame;
    size_t width = cap.get(cv::CAP_PROP_FRAME_WIDTH);
    size_t height = cap.get(cv::CAP_PROP_FRAME_HEIGHT);
    ov::Tensor video_tensor(ov::element::u8, ov::Shape{num_frames, height, width, 3});
    auto video_tensor_data = video_tensor.data<uint8_t>();

    size_t frame_idx = 0;
    while (cap.read(frame)) {
        OPENVINO_ASSERT(frame.cols == width && frame.rows == height && frame.channels() == 3);
        if (std::find(indices.begin(), indices.end(), frame_idx) != indices.end()) {
            // Copy the selected BGR frame into the next [height, width, 3] slot of the tensor.
            memcpy(video_tensor_data, frame.data, frame.total() * 3 * sizeof(uint8_t));
            video_tensor_data += frame.total() * 3;
        }
        frame_idx++;
    }
    OPENVINO_ASSERT(frame_idx == total_num_frames);

    return video_tensor;
}

std::vector<ov::Tensor> load_videos(const std::filesystem::path& input_path) {
    if (input_path.empty() || !fs::exists(input_path)) {
        throw std::runtime_error{"Path to videos is empty or does not exist."};
    }
    if (fs::is_directory(input_path)) {
        std::set<fs::path> sorted_videos{fs::directory_iterator(input_path), fs::directory_iterator()};
        std::vector<ov::Tensor> videos;
        for (const fs::path& dir_entry : sorted_videos) {
            videos.push_back(load_video(dir_entry));
        }
        return videos;
    }
    return {load_video(input_path)};
}

bool print_subword(std::string&& subword) {
    return !(std::cout << subword << std::flush);
}

int main(int argc, char* argv[]) try {
    if (argc < 3 || argc > 4) {
        throw std::runtime_error(std::string{"Usage "} + argv[0] + " <MODEL_DIR> <VIDEO_FILE OR DIR_WITH_VIDEOS> <DEVICE>");
    }

    std::vector<ov::Tensor> videos = load_videos(argv[2]);

    // GPU and NPU can be used as well.
    // Note: If NPU is selected, only the language model will be run on NPU.
    std::string device = (argc == 4) ? argv[3] : "CPU";
    ov::AnyMap enable_compile_cache;
    if (device == "GPU") {
        // Cache compiled models on disk for GPU to save time on the
        // next run. It's not beneficial for CPU.
        enable_compile_cache.insert({ov::cache_dir("vlm_cache")});
    }
    ov::genai::VLMPipeline pipe(argv[1], device, enable_compile_cache);

    ov::genai::GenerationConfig generation_config;
    generation_config.max_new_tokens = 100;

    std::string prompt;

    pipe.start_chat();
    std::cout << "question:\n";

    std::getline(std::cin, prompt);
    pipe.generate(prompt,
                  ov::genai::videos(videos),
                  ov::genai::generation_config(generation_config),
                  ov::genai::streamer(print_subword));
    std::cout << "\n----------\n"
                 "question:\n";
    while (std::getline(std::cin, prompt)) {
        pipe.generate(prompt,
                      ov::genai::generation_config(generation_config),
                      ov::genai::streamer(print_subword));
        std::cout << "\n----------\n"
                     "question:\n";
    }
    pipe.finish_chat();
} catch (const std::exception& error) {
    try {
        std::cerr << error.what() << '\n';
    } catch (const std::ios_base::failure&) {}
    return EXIT_FAILURE;
} catch (...) {
    try {
        std::cerr << "Non-exception object thrown\n";
    } catch (const std::ios_base::failure&) {}
    return EXIT_FAILURE;
}
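For reference, the uniform sampling done by make_indices above uses a stride of total_frames / num_frames, truncates each product to an integer, and clamps to the last frame. The short Python sketch below is an illustration only, not part of the sample; it mirrors the C++ helper so the selected frame indices can be checked in isolation.

# Illustration only (not part of the PR): mirrors make_indices() from the C++ sample above.
def make_indices(total_frames: int, num_frames: int) -> list[int]:
    step = total_frames / num_frames
    return [min(int(i * step), total_frames - 1) for i in range(num_frames)]

# A 97-frame clip sampled down to 8 frames keeps frames 0, 12, 24, 36, 48, 60, 72 and 84.
print(make_indices(97, 8))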
samples/python/visual_language_chat/video_to_text_chat.py (100 additions, 0 deletions)
#!/usr/bin/env python3
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0


import argparse
import numpy as np
import cv2
import openvino_genai
from openvino import Tensor
from pathlib import Path


def streamer(subword: str) -> bool:
    '''
    Args:
        subword: sub-word of the generated text.
    Returns: a flag indicating whether generation should be stopped.
    '''
    print(subword, end='', flush=True)

    # No value is returned, as in this example we don't want to stop the generation in this method.
    # "return None" will be treated the same as "return openvino_genai.StreamingStatus.RUNNING".


def read_video(path: str, num_frames: int = 8) -> Tensor:
    '''
    Args:
        path: The path to the video.
        num_frames: Number of frames sampled from the video.
    Returns: the ov.Tensor containing the video.
    '''
    cap = cv2.VideoCapture(path)

    frames = []

    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        frames.append(np.array(frame))

    cap.release()
    if not frames:
        raise RuntimeError(f"Could not read any frames from '{path}'.")

    # Keep num_frames frames sampled uniformly across the clip.
    indices = np.arange(0, len(frames), len(frames) / num_frames).astype(int)
    frames = [frames[i] for i in indices]

    return Tensor(frames)


def read_videos(path: str) -> list[Tensor]:
    entry = Path(path)
    if entry.is_dir():
        return [read_video(str(file)) for file in sorted(entry.iterdir())]
    return [read_video(path)]


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('model_dir', help="Path to the model directory")
    parser.add_argument('video_dir', help="Path to a video file or a directory with video files.")
    parser.add_argument('device', nargs='?', default='CPU', help="Device to run the model on (default: CPU)")
    args = parser.parse_args()

    videos = read_videos(args.video_dir)

    # GPU and NPU can be used as well.
    # Note: If NPU is selected, only the language model will be run on the NPU.
    enable_compile_cache = dict()
    if args.device == "GPU":
        # Cache compiled models on disk for GPU to save time on the next run.
        # It's not beneficial for CPU.
        enable_compile_cache["CACHE_DIR"] = "vlm_cache"

    pipe = openvino_genai.VLMPipeline(args.model_dir, args.device, **enable_compile_cache)

    config = openvino_genai.GenerationConfig()
    config.max_new_tokens = 100

    pipe.start_chat()
    prompt = input('question:\n')
    pipe.generate(prompt, videos=videos, generation_config=config, streamer=streamer)

    while True:
        try:
            prompt = input("\n----------\n"
                           "question:\n")
        except EOFError:
            break
        pipe.generate(prompt, generation_config=config, streamer=streamer)
    pipe.finish_chat()


if __name__ == '__main__':
    main()
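The Python sample hands each sampled clip to openvino_genai as a single Tensor per video, following the same [num_frames, height, width, 3] u8 layout that the C++ sample builds explicitly. Below is a minimal sketch of that shape check, an illustration only: it assumes numpy and openvino are installed and uses synthetic frames in place of a decoded clip.

# Illustration only (not part of the PR): synthetic BGR frames stand in for a decoded video.
import numpy as np
from openvino import Tensor

frames = [np.zeros((240, 320, 3), dtype=np.uint8) for _ in range(30)]

# Same uniform sampling as read_video() above: keep 8 evenly spaced frames.
num_frames = 8
indices = np.arange(0, len(frames), len(frames) / num_frames).astype(int)
sampled = [frames[i] for i in indices]

# One u8 tensor of shape [num_frames, height, width, 3] per video.
video = Tensor(np.stack(sampled))
assert tuple(video.get_shape()) == (8, 240, 320, 3)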