
Conversation

@dvrogozh
Contributor

@dvrogozh dvrogozh commented Oct 7, 2025

This commit exposes the torchcodec core library for use by third-party modules at the C++ level. The primary purpose is to allow out-of-tree implementations of non-CUDA device interfaces. The major changes are:

  • Exposed TorchCodecConfig.cmake, which defines the torchcodec targets to link against

  • Provided Python-level APIs to facilitate out-of-tree device interfaces working with torchcodec:

    • torchcodec.cmake_prefix_path - path pointing to the TorchCodecConfig.cmake configuration
    • torchcodec.variant - variant of the loaded torchcodec library, i.e. N in libtorchcodec_core{N}.so (currently the FFmpeg major version)
    • torchcodec.core_library_path - full path to the loaded torchcodec core library
  • Dropped src/torchcodec/_core/ from include paths to allow using the core library out-of-tree

TorchCodecConfig.cmake has two working modes:

  • By default, the config checks the available version of the FFmpeg libraries via pkg-config and configures the corresponding (single) torchcodec variant
  • Alternatively, if TORCHCODEC_FFMPEG{N}_INSTALL_PREFIX is set (N=4,5,6,7, the FFmpeg major version), the config defines the torchcodec target corresponding to the specified FFmpeg version. Note that multiple prefixes can be specified at the same time, allowing a build against several torchcodec variants at once.

The config defines the TORCHCODEC_VARIANTS variable with values corresponding to the FFmpeg major versions of the available torchcodec core libraries. It also defines torchcodec::ffmpeg${N} and torchcodec::core${N} targets, where N takes values from TORCHCODEC_VARIANTS.
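
To make the consumer side concrete, here is a minimal CMake sketch of a hypothetical out-of-tree plugin build. The project, target, and source names (torchcodec_plugin_example, plugin_core${N}, PluginDeviceInterface.cpp) and the find_package(TorchCodec) call are assumptions for illustration; TorchCodecConfig.cmake, TORCHCODEC_VARIANTS, torchcodec::core${N}, torchcodec::ffmpeg${N} and TORCHCODEC_FFMPEG{N}_INSTALL_PREFIX are the names described above.

```cmake
# Minimal sketch (assumed names) of an out-of-tree plugin consuming TorchCodecConfig.cmake.
# Point CMake at the config via the path exposed by the Python API, e.g.:
#   cmake -DCMAKE_PREFIX_PATH="$(python -c 'import torchcodec; print(torchcodec.cmake_prefix_path)')" ..
# Optionally add -DTORCHCODEC_FFMPEG7_INSTALL_PREFIX=/opt/ffmpeg7 (and/or other N) to build
# against explicit FFmpeg versions instead of relying on pkg-config detection.
cmake_minimum_required(VERSION 3.18)
project(torchcodec_plugin_example LANGUAGES CXX)

find_package(TorchCodec REQUIRED)  # loads TorchCodecConfig.cmake and sets TORCHCODEC_VARIANTS

# Build one plugin library per available torchcodec variant (FFmpeg major version).
foreach(variant IN LISTS TORCHCODEC_VARIANTS)
  add_library(plugin_core${variant} SHARED PluginDeviceInterface.cpp)
  target_link_libraries(plugin_core${variant}
    PRIVATE torchcodec::core${variant} torchcodec::ffmpeg${variant})
endforeach()
```

The idea is that the plugin's Python side can then pick, at import time, the build whose N matches torchcodec.variant.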

See the following repository for an actual out-of-tree device-interface torchcodec plugin: https://github.com/dvrogozh/torchcodec-xpu

I suggest paying attention to these:

CC: @scotts, @NicolasHug

@meta-cla meta-cla bot added the CLA Signed label Oct 7, 2025
Contributor

@NicolasHug NicolasHug left a comment

Thanks @dvrogozh, I left some questions and comments below. It looks good overall, but I hope we can simplify and unify the logic with the one that already exists within our own cmake files.

On the #include path changes, I will have to pre-import your PR internally to verify that this isn't breaking our internal build (and to try to find internal workarounds if needed).

_pybind_ops: Optional[ModuleType] = None

variant = None
core_library_path = None
Contributor

Can you help me understand why in

https://github.com/dvrogozh/torchcodec-xpu/blob/59bfef60e9b41244e804c112729c42d9176ae368/src/torchcodec_xpu/__init__.py#L43

we only need to load the libtorchcodec_core?.so file, and not the other .so files like libtorchcodec_pybind_ops?.so and libtorchcodec_custom_ops?.so?

Contributor Author

That's because torchcodec_xpu does not use any of the TorchCodec Python/C++ APIs defined in libtorchcodec_pybind_ops?.so or libtorchcodec_custom_ops?.so. The only thing torchcodec_xpu does is register the XPU device interface at the C++ level, i.e. call an analogue of:

    static bool g_cuda = registerDeviceInterface(
        DeviceInterfaceKey(torch::kCUDA),
        [](const torch::Device& device) {
          return new CudaDeviceInterface(device);
        });

Note that if we decide to support use cases where an application wants to link against the TorchCodec C++ APIs, then libtorchcodec_pybind_ops?.so, libtorchcodec_custom_ops?.so, or both might be needed. In that case TorchCodecConfig.cmake would need to be updated to expose these targets as well.
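
If that direction is ever taken, the config-side change could look roughly like the sketch below. This is a hypothetical illustration only: the torchcodec::custom_ops${N} target name and the _torchcodec_libdir variable do not exist in the current TorchCodecConfig.cmake.

```cmake
# Hypothetical sketch only: exposing libtorchcodec_custom_ops{N}.so as an additional imported
# target next to torchcodec::core${N}. Neither torchcodec::custom_ops${N} nor _torchcodec_libdir
# is defined by TorchCodecConfig.cmake today.
foreach(variant IN LISTS TORCHCODEC_VARIANTS)
  add_library(torchcodec::custom_ops${variant} SHARED IMPORTED)
  set_target_properties(torchcodec::custom_ops${variant} PROPERTIES
    IMPORTED_LOCATION "${_torchcodec_libdir}/libtorchcodec_custom_ops${variant}.so")
endforeach()
```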

@dvrogozh
Contributor Author

> On the #include path changes, I will have to pre-import your PR internally to verify that this isn't breaking our internal build (and to try to find internal workarounds if needed).

I've rebased the PR and resolved conflicts, but I will take a look and start addressing review comments later today or early next week. Please let me know how the #include story goes. If that change is OK (I hope so), then we may consider making it separately to reduce the scope of this PR and focus the review on the cmake aspects.

@meta-codesync

meta-codesync bot commented Oct 24, 2025

@NicolasHug has imported this pull request. If you are a Meta employee, you can view this in D85451979.

@NicolasHug
Contributor

Can confirm the header #include changes are OK internally! Yes, happy to consider a separate diff that does just that. Thank you!

@dvrogozh
Contributor Author

> Can confirm the header #include changes are OK internally! Yes, happy to consider a separate diff that does just that.

Thank you! Here is the #include PR: #1002.

dvrogozh pushed a commit to dvrogozh/torchcodec-xpu that referenced this pull request Oct 30, 2025

See: meta-pytorch/torchcodec#938
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@gmail.com>
@dvrogozh
Contributor Author

@NicolasHug, I've addressed all your comments and completed the second iteration of the PR. Could you please review it again?

Contributor

@NicolasHug NicolasHug left a comment

Thanks for the PR and the refactoring @dvrogozh, this looks good! I made some minor cosmetic changes; I'll merge it soon. Thank you!

@NicolasHug NicolasHug merged commit dc87228 into meta-pytorch:main Nov 10, 2025
57 of 60 checks passed