Conversation

@jiridanek jiridanek commented Nov 19, 2025

https://issues.redhat.com/browse/RHAIENG-2042

Description

This aligns ODH with the AIPCC base images, which are built on RHEL; the community-facing ODH images should therefore use CentOS Stream.

Related to

Requires

How Has This Been Tested?

Self checklist (all need to be checked):

  • Ensure that you have run make test (gmake on macOS) before asking for review
  • Changes to everything except Dockerfile.konflux files should be made in odh/notebooks and are automatically synced to rhds/notebooks. For Konflux-specific changes, modify the Dockerfile.konflux files directly in rhds/notebooks: these require special attention in the downstream repository and flow into the upcoming RHOAI release.

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

Summary by CodeRabbit

Release Notes

  • Chores
    • Updated container base images across all services from UBI9 to c9s variants for Python 3.12 environments
    • Simplified container build process by removing external repository configuration steps
    • Updated file permission validation tests


@openshift-ci openshift-ci bot requested review from atheo89 and dibryant November 19, 2025 11:39
@github-actions github-actions bot added the review-requested GitHub Bot creates notification on #pr-review-ai-ide-team slack channel label Nov 19, 2025

coderabbitai bot commented Nov 19, 2025

Walkthrough

This pull request updates Docker build configurations across multiple container images to migrate from UBI9 base images to custom C9S-based images and removes explicit UBI repository injection stages from Dockerfiles. Updates also include test file adjustments for file permission assertions.

Changes

Cohort / File(s) Summary
CPU Build Configuration — UBI to C9S Migration
codeserver/ubi9-python-3.12/build-args/cpu.conf, jupyter/datascience/ubi9-python-3.12/build-args/cpu.conf, jupyter/minimal/ubi9-python-3.12/build-args/cpu.conf, jupyter/trustyai/ubi9-python-3.12/build-args/cpu.conf, runtimes/datascience/ubi9-python-3.12/build-args/cpu.conf, runtimes/minimal/ubi9-python-3.12/build-args/cpu.conf
BASE_IMAGE updated from registry.access.redhat.com/ubi9/python-312:latest to quay.io/opendatahub/odh-base-image-cpu-py312-c9s:latest
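In each of these files the migration amounts to a one-line BASE_IMAGE substitution, roughly:

```
# Before (UBI9 Python 3.12 base)
BASE_IMAGE=registry.access.redhat.com/ubi9/python-312:latest
# After (CentOS Stream 9 based ODH image)
BASE_IMAGE=quay.io/opendatahub/odh-base-image-cpu-py312-c9s:latest
```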
CUDA Build Configuration — UBI9 to C9S Variant
jupyter/minimal/ubi9-python-3.12/build-args/cuda.conf, jupyter/pytorch+llmcompressor/ubi9-python-3.12/build-args/cuda.conf, jupyter/pytorch/ubi9-python-3.12/build-args/cuda.conf, jupyter/tensorflow/ubi9-python-3.12/build-args/cuda.conf
BASE_IMAGE updated from ...ubi9:v12.8 to ...c9s:v12.8 (CUDA variant)
ROCM Build Configuration — UBI9 to C9S Variant
jupyter/minimal/ubi9-python-3.12/build-args/rocm.conf, jupyter/rocm/pytorch/ubi9-python-3.12/build-args/rocm.conf, jupyter/rocm/tensorflow/ubi9-python-3.12/build-args/rocm.conf, runtimes/rocm-tensorflow/ubi9-python-3.12/build-args/rocm.conf
BASE_IMAGE updated from ...ubi9:v6.3 to ...c9s:v6.3 (ROCM variant)
CPU Dockerfile — Remove UBI Repository Stage
codeserver/ubi9-python-3.12/Dockerfile.cpu, jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu, jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu, jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu, runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu, runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu
Removed FROM registry.access.redhat.com/ubi9/ubi AS ubi-repos stage and associated COPY --from=ubi-repos /etc/yum.repos.d/ubi.repo instructions
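The removed pattern looked roughly like this in each affected Dockerfile (a sketch reconstructed from the stage and COPY instructions named above, not a verbatim excerpt):

```
# Helper stage that existed only to expose the UBI repo definitions (removed):
FROM registry.access.redhat.com/ubi9/ubi AS ubi-repos

# In each build stage, the repo file was copied in so dnf could install and
# upgrade packages on the otherwise repo-less base image (removed):
COPY --from=ubi-repos /etc/yum.repos.d/ubi.repo /etc/yum.repos.d/ubi.repo
```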
CUDA Dockerfile — Remove UBI Repository Stage
jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda, jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda, jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda, jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda, rstudio/rhel9-python-3.12/Dockerfile.cuda, runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda, runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda, runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda
Removed FROM registry.access.redhat.com/ubi9/ubi AS ubi-repos stage and associated COPY --from=ubi-repos /etc/yum.repos.d/ubi.repo instructions
ROCM Dockerfile — Remove UBI Repository Stage
jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm, jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm, jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm, runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm, runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
Removed FROM registry.access.redhat.com/ubi9/ubi AS ubi-repos stage and associated COPY --from=ubi-repos /etc/yum.repos.d/ubi.repo instructions
RStudio CPU Dockerfile — Remove UBI Repository Stage
rstudio/rhel9-python-3.12/Dockerfile.cpu
Removed ubi-repos stage and associated COPY instructions for ubi.repo and RPM-GPG-KEY-redhat-release
Test File Updates
tests/containers/base_image_test.py
Updated the type annotation of directories_to_check from list[str] to list[list[str]], changed the stat format from --format='%a:%g:%u' to --format='%a:%g' to drop the UID field, and updated the assertion to compare only permissions and GID, with an explanatory comment that UID is ignored on OpenShift
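The resulting check can be sketched in isolation as follows (illustrative names only, not the exact test code; `parse_stat` and `stat_mode_gid` are helpers introduced here for the sketch):

```python
import subprocess


def parse_stat(output: str) -> tuple[str, str]:
    # The format string is now '%a:%g' (permissions:GID). UID is deliberately
    # absent because OpenShift runs containers under an arbitrary UID, so only
    # permissions and GID are stable enough to assert on.
    mode, gid = output.strip().split(":")
    return mode, gid


def stat_mode_gid(path: str) -> tuple[str, str]:
    """Mirror the test's stat invocation for a single directory."""
    out = subprocess.run(
        ["stat", "--format=%a:%g", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_stat(out)


# Example assertion against /opt/app-root/lib as described in the summary:
# mode, gid = stat_mode_gid("/opt/app-root/lib")
# assert (mode, gid) == ("775", "0")
```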

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~15 minutes

  • Pattern consistency: Verify that all base image references are correctly updated to their C9S counterparts across CPU, CUDA, and ROCM variants while preserving version tags
  • Dockerfile removals: Confirm that UBI repository injection stages and COPY instructions are removed consistently across all affected Dockerfiles without disrupting other build stages
  • Test assertions: Validate that the permission test logic correctly handles the reduced stat output (permissions and GID only) and that the UID exclusion rationale is appropriate for the test environment

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 50.00%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
Check name Status Explanation
Title check ✅ Passed The title accurately summarizes the main change: updating ODH base image references from UBI9 to c9s (CentOS Stream) across multiple build configurations and Dockerfiles.
Description check ✅ Passed The description provides context (AIPCC alignment, Red Hat issue reference), explains the rationale, lists prerequisites, and includes testing confirmations. However, the 'Testing instructions' and 'Developer verification' merge criteria remain unchecked.

@openshift-ci openshift-ci bot added size/m and removed size/m labels Nov 19, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 01c24b8 and 0f6ca89.

📒 Files selected for processing (14)
  • codeserver/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • jupyter/datascience/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • jupyter/minimal/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • jupyter/minimal/ubi9-python-3.12/build-args/cuda.conf (1 hunks)
  • jupyter/minimal/ubi9-python-3.12/build-args/rocm.conf (1 hunks)
  • jupyter/pytorch+llmcompressor/ubi9-python-3.12/build-args/cuda.conf (1 hunks)
  • jupyter/pytorch/ubi9-python-3.12/build-args/cuda.conf (1 hunks)
  • jupyter/rocm/pytorch/ubi9-python-3.12/build-args/rocm.conf (1 hunks)
  • jupyter/rocm/tensorflow/ubi9-python-3.12/build-args/rocm.conf (1 hunks)
  • jupyter/tensorflow/ubi9-python-3.12/build-args/cuda.conf (1 hunks)
  • jupyter/trustyai/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • runtimes/datascience/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • runtimes/minimal/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • runtimes/rocm-tensorflow/ubi9-python-3.12/build-args/rocm.conf (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (36)
  • GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (rocm-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-trustyai-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/s390x, false) / build
  • GitHub Check: build (cuda-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-runtime-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
  • GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
  • GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
  • GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
  • GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-minimal-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-codeserver-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-datascience-cpu-py312-ubi9-on-pull-request
🔇 Additional comments (13)
runtimes/minimal/ubi9-python-3.12/build-args/cpu.conf (1)

1-1: LGTM: CPU c9s base image reference is consistent

Name, Python version, and c9s qualifier all look correct for the minimal runtime CPU base.

jupyter/minimal/ubi9-python-3.12/build-args/rocm.conf (1)

1-1: LGTM: ROCm c9s base image and version tag look correct

ROCm, Python 3.12, c9s, and v6.3 tag align with the PR’s migration pattern.

jupyter/minimal/ubi9-python-3.12/build-args/cuda.conf (1)

1-1: LGTM: CUDA c9s base image reference is correct

CUDA, Python 3.12, c9s, and v12.8 tag are all consistent with the intended base-image family.

runtimes/datascience/ubi9-python-3.12/build-args/cpu.conf (1)

1-1: LGTM: Datascience CPU runtime now aligned on c9s base

Uses the same cpu-py312-c9s:latest image as other CPU configs, which keeps runtimes consistent.

jupyter/rocm/pytorch/ubi9-python-3.12/build-args/rocm.conf (1)

1-1: LGTM: Jupyter ROCm PyTorch correctly switched to c9s base

ROCm c9s image and v6.3 tag match the pattern used in other ROCm build-args.

jupyter/tensorflow/ubi9-python-3.12/build-args/cuda.conf (1)

1-1: LGTM: TensorFlow CUDA build now on shared c9s base

CUDA py312 c9s v12.8 image aligns TensorFlow with the other CUDA-based images.

jupyter/trustyai/ubi9-python-3.12/build-args/cpu.conf (1)

1-1: LGTM: TrustyAI CPU config aligned to common c9s CPU base

Uses the shared cpu-py312-c9s:latest image, which simplifies maintenance across CPU images.

jupyter/pytorch/ubi9-python-3.12/build-args/cuda.conf (1)

1-1: LGTM: PyTorch CUDA build-args correctly migrated to c9s

CUDA py312 c9s v12.8 image is consistent with the other CUDA-based Jupyter images.

jupyter/pytorch+llmcompressor/ubi9-python-3.12/build-args/cuda.conf (1)

1-1: LGTM!

The CUDA base image variant has been correctly updated from ubi9 to c9s while preserving the v12.8 version tag.

jupyter/minimal/ubi9-python-3.12/build-args/cpu.conf (1)

1-2: LGTM!

The addition of the explanatory comment and the base image migration to c9s are both correct.

jupyter/datascience/ubi9-python-3.12/build-args/cpu.conf (1)

1-1: LGTM!

The base image migration to c9s is correct and consistent with other CPU-based configurations in this PR.

runtimes/rocm-tensorflow/ubi9-python-3.12/build-args/rocm.conf (1)

1-1: LGTM!

The ROCm base image variant has been correctly updated from ubi9 to c9s while preserving the v6.3 version tag.

jupyter/rocm/tensorflow/ubi9-python-3.12/build-args/rocm.conf (1)

1-1: LGTM!

The ROCm base image migration to c9s is correct and consistent with the other ROCm configuration in this PR.

@openshift-ci openshift-ci bot added size/m and removed size/m labels Nov 19, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (3)
jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm (1)

35-39: Update comment to clarify repository source after c9s migration.

Line 35 states "Inject the official UBI 9 repository configuration" but if BASE_IMAGE is being changed to c9s (per PR objectives), this comment becomes misleading. The configuration is still from UBI 9, but it's being used with a non-UBI (c9s) base image. Clarify that we're sourcing UBI repository configuration for use with the AIPCC c9s base image.

Suggested comment update:

-# Inject the official UBI 9 repository configuration into the AIPCC base image.
-# The Quay-based AIPCC image is "repo-less" by default (https://gitlab.com/redhat/rhel-ai/core/base-images/app#repositories), so dnf cannot upgrade or install packages.
-# By copying ubi.repo from the public UBI 9 image, we enable package management for upgrades and installations.
+# Inject the official UBI 9 repository configuration into the AIPCC c9s base image.
+# The Quay-based AIPCC image is "repo-less" by default (https://gitlab.com/redhat/rhel-ai/core/base-images/app#repositories), so dnf cannot upgrade or install packages.
+# By copying ubi.repo and GPG keys from the public UBI 9 image, we enable package management and verification for upgrades and installations.
jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm (1)

35-39: Update comment to clarify repository source after c9s migration.

Same as PyTorch variant: Line 35 states "Inject the official UBI 9 repository configuration" but if BASE_IMAGE is being changed to c9s (per PR objectives), this comment becomes misleading. Clarify that we're sourcing UBI repository configuration for use with the AIPCC c9s base image.

Suggested comment update (consistent with PyTorch variant):

-# Inject the official UBI 9 repository configuration into the AIPCC base image.
-# The Quay-based AIPCC image is "repo-less" by default (https://gitlab.com/redhat/rhel-ai/core/base-images/app#repositories), so dnf cannot upgrade or install packages.
-# By copying ubi.repo from the public UBI 9 image, we enable package management for upgrades and installations.
+# Inject the official UBI 9 repository configuration into the AIPCC c9s base image.
+# The Quay-based AIPCC image is "repo-less" by default (https://gitlab.com/redhat/rhel-ai/core/base-images/app#repositories), so dnf cannot upgrade or install packages.
+# By copying ubi.repo and GPG keys from the public UBI 9 image, we enable package management and verification for upgrades and installations.
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (1)

14-27: Remove redundant ARG TARGETARCH declaration.

Line 27 duplicates the ARG TARGETARCH declaration from line 14. While functionally harmless, this reduces clarity.

 ARG TARGETARCH
 
 WORKDIR /opt/app-root/bin
@@ -24,8 +24,6 @@ COPY --from=ubi-repos /etc/yum.repos.d/ubi.repo /etc/yum.repos.d/ubi.repo
 COPY --from=ubi-repos /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
 
 ARG TARGETARCH

Remove the second declaration at line 27.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0f6ca89 and 525a95c.

📒 Files selected for processing (20)
  • codeserver/ubi9-python-3.12/Dockerfile.cpu (2 hunks)
  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (1 hunks)
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (1 hunks)
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda (1 hunks)
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm (1 hunks)
  • jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (1 hunks)
  • jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda (1 hunks)
  • jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm (1 hunks)
  • jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm (1 hunks)
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda (1 hunks)
  • jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (1 hunks)
  • rstudio/rhel9-python-3.12/Dockerfile.cpu (1 hunks)
  • rstudio/rhel9-python-3.12/Dockerfile.cuda (1 hunks)
  • runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (1 hunks)
  • runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu (1 hunks)
  • runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (1 hunks)
  • runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda (1 hunks)
  • runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm (1 hunks)
  • runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm (1 hunks)
  • runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (49)
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-codeserver-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-rhel9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-minimal-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-rhel9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
  • GitHub Check: build (jupyter-trustyai-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/s390x, false) / build
  • GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (rocm-runtime-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-cuda-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
  • GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
  • GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-cuda-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
  • GitHub Check: build (rocm-runtime-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (cuda-rstudio-rhel9-python-3.12, 3.12, linux/amd64, true) / build
  • GitHub Check: build (rstudio-rhel9-python-3.12, 3.12, linux/amd64, true) / build
🔇 Additional comments (20)
jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm (1)

23-23: Approve: RPM-GPG-KEY addition for signature verification.

The added COPY instruction correctly brings the RPM-GPG-KEY from the ubi-repos stage, enabling package signature verification for dnf operations. This complements the existing ubi.repo copy (line 22) and aligns with the migration to c9s base images where the target image is repo-less by default. The syntax and destination path follow the established pattern.

Verify that the destination path /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release is consistent across all Dockerfiles updated in this PR, especially other ROCM, CUDA, and Jupyter variants mentioned in the summary.

runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm (1)

23-23: Good addition for package signature verification.

The addition of the RPM-GPG-KEY-redhat-release COPY is appropriate and enables dnf to verify package signatures when the base image switches to c9s. This aligns with the pattern used across other Dockerfiles in the PR.

Verify that dnf is configured (or defaults) to use this GPG key for package signature verification during the dnf upgrade and dnf install commands at lines 39 and 48. If GPG verification requires explicit configuration, ensure it's in place.
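As a reference point for that verification: UBI repo definitions conventionally enable GPG checking and point at exactly this key path, along these lines (an illustrative stanza, not copied from the image):

```
[ubi-9-baseos-rpms]
name = Red Hat Universal Base Image 9 (RPMs) - BaseOS
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
gpgcheck = 1
```

If the copied ubi.repo carries such stanzas, dnf will pick up the key from the copied path without any further configuration.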

jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm (2)

38-39: Approve: GPG key copy enables secure package verification.

Adding the RPM-GPG-KEY-redhat-release copy alongside the UBI repository config is the correct approach. This enables dnf to verify package signatures from UBI repositories even when running on a c9s base image (both are RHEL 9 / EL9 derivatives). The change is secure and necessary for the base image migration.


4-4: Verify that build-args files have been updated to use c9s BASE_IMAGE.

Ensure that the corresponding build-args files (referenced in PR objectives as residing in jupyter/rocm/pytorch/ubi9-python-3.12/build-args/) have been updated to reference c9s variants of the ROCm base image instead of ubi9.

jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm (2)

38-39: Approve: GPG key copy enables secure package verification.

Adding the RPM-GPG-KEY-redhat-release copy alongside the UBI repository config is the correct approach for the c9s base image migration. Both CentOS Stream 9 and RHEL 9 share the EL9 base, making UBI repo configurations and GPG keys compatible.


4-4: Verify that build-args files have been updated to use c9s BASE_IMAGE.

Ensure that the corresponding build-args files (referenced in PR objectives) have been updated to reference c9s variants of the ROCm base image instead of ubi9.

runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm (1)

23-23: GPG key provisioning placement is sound.

The addition of the RPM-GPG-KEY-redhat-release copy following the ubi.repo copy is correctly positioned to ensure signature verification is available for subsequent package operations.

jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (1)

54-54: Proper positioning for signature verification.

The GPG key copy is correctly placed after repository configuration and before package upgrade operations.

runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda (1)

25-25: GPG key copy correctly positioned in cuda-base stage.

The addition follows the pattern established across the repository and ensures signature verification is available for package operations.

runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (1)

25-25: GPG key copy is correctly positioned.

The addition follows the established pattern across the repository.

runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu (1)

23-23: GPG key copy is correctly positioned before package operations.

The addition ensures signature verification is available during subsequent dnf operations.

jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (1)

23-23: GPG key provisioning is correctly positioned.

The copy follows the ubi.repo provisioning and precedes package upgrade operations.

jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda (1)

25-25: GPG key copy is correctly positioned in cuda-base stage.

The addition follows the established pattern and ensures signature verification availability.

jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda (1)

41-41: GPG key addition is correctly positioned.

The copy follows repository configuration and precedes package operations that require signature verification.

To ensure the changes are fully functional, please verify that the ubi-repos base image (registry.access.redhat.com/ubi9/ubi) contains the file /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release at the specified path. This file should be present in standard UBI9 images, but it's worth confirming as part of the build validation.

runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda (1)

26-27: Consistent GPG key injection in cuda-base stage.

Same pattern as other files—GPG key properly positioned after repository configuration and before dnf operations.

jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (1)

40-41: GPG key properly injected into cuda-base stage.

Placement and pattern consistent with other Dockerfiles in the PR.

runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (1)

24-25: GPG key properly positioned in cuda-base stage.

Consistent with the pattern across all Dockerfiles in this PR.

codeserver/ubi9-python-3.12/Dockerfile.cpu (2)

24-25: GPG key injected into rpm-base stage for package verification.

Correctly positioned after repository configuration and before the dnf package fetch operation at line 36.


85-86: GPG key injected into cpu-base stage for package verification.

Correctly positioned after repository configuration and before the subsequent dnf operations at lines 102 and 111.

jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (1)

63-64: GPG key path verified; the code change is correct.

The path /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release is the standard location for Red Hat release signatures in RHEL 9 and UBI 9, and UBI 9 repository configurations use this same path for signature verification. The COPY instruction correctly ensures the dnf operations can verify package signatures in the cpu-base stage.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/containers/base_image_test.py (1)

233-241: Consider removing unused UID data.

Since UID is no longer checked (lines 246-250), the following are now unused:

  • expected_uid variable (line 233)
  • The fourth element (UID) in each list entry (lines 237-238, 241)

Apply this diff to clean up the unused data:

-        expected_uid = "1001"  # default
         expected_gid = "0"  # root
         # Directories to assert permissions and ownerships as we did in ODS-CI
         directories_to_check: list[list[str]] = [
-            [f"{app_root_path}/lib", "775", expected_gid, expected_uid],
+            [f"{app_root_path}/lib", "775", expected_gid],
         ]
         if not utils.is_rstudio_image(image):
             # RStudio image doesn't have '/opt/app-root/share' directory
-            directories_to_check.append([f"{app_root_path}/share", "775", expected_gid, expected_uid])
+            directories_to_check.append([f"{app_root_path}/share", "775", expected_gid])
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 525a95c and 58d354a.

📒 Files selected for processing (21)
  • codeserver/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm (0 hunks)
  • jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm (0 hunks)
  • jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm (0 hunks)
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • rstudio/rhel9-python-3.12/Dockerfile.cpu (0 hunks)
  • rstudio/rhel9-python-3.12/Dockerfile.cuda (0 hunks)
  • runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm (0 hunks)
  • runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm (0 hunks)
  • runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • tests/containers/base_image_test.py (2 hunks)
💤 Files with no reviewable changes (20)
  • runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu
  • runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm
  • codeserver/ubi9-python-3.12/Dockerfile.cpu
  • rstudio/rhel9-python-3.12/Dockerfile.cpu
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
  • runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
  • jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
  • jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
  • runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm
  • rstudio/rhel9-python-3.12/Dockerfile.cuda
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm
  • runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
  • runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
  • runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
  • jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (22)
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-minimal-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-rhel9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-rhel9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-codeserver-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-on-pull-request
  • GitHub Check: build (cuda-rstudio-rhel9-python-3.12, 3.12, linux/amd64, true) / build
  • GitHub Check: build (rstudio-rhel9-python-3.12, 3.12, linux/amd64, true) / build
🔇 Additional comments (2)
tests/containers/base_image_test.py (2)

236-236: LGTM! Type annotation corrected.

The type annotation fix from list[str] to list[list[str]] correctly reflects the actual data structure, which is a list of lists containing path, permissions, gid, and uid values.


246-250: LGTM! UID check removal aligns with OpenShift security model.

The change to ignore UID and check only permissions and GID is appropriate for OpenShift environments, where containers run with random UIDs (for security) but deterministic GIDs. This ensures the test remains stable across different OpenShift deployments.

@jiridanek
Member Author

Error: unable to copy from source docker://quay.io/sclorg/python-312-c9s:c9s: choosing an image from manifest list docker://quay.io/sclorg/python-312-c9s:c9s: no image found in manifest list for architecture "s390x", variant "", OS "linux"

Not a huge problem: we can start from a CentOS image and install Python ourselves, borrowing the setup from either sclorg or AIPCC. That may be for the best, because then we install only what AIPCC preinstalls and nothing extra, which gives faster feedback for RHDS work.
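The failure above is a manifest-list lookup miss: the tag's image index simply contains no entry whose platform matches linux/s390x. A minimal sketch of that lookup, using a made-up index (the digests and architecture set are placeholders, not the real quay.io/sclorg/python-312-c9s contents):

```python
import json

# A trimmed sample of an OCI image index (manifest list), illustrating the
# structure a registry returns; the digests here are placeholders.
manifest_list = json.loads("""
{
  "schemaVersion": 2,
  "manifests": [
    {"digest": "sha256:aaa", "platform": {"architecture": "amd64", "os": "linux"}},
    {"digest": "sha256:bbb", "platform": {"architecture": "arm64", "os": "linux"}},
    {"digest": "sha256:ccc", "platform": {"architecture": "ppc64le", "os": "linux"}}
  ]
}
""")

# Collect the architectures the list actually covers.
archs = {m["platform"]["architecture"] for m in manifest_list["manifests"]}
print(sorted(archs))

# The pull fails exactly when the requested platform has no matching entry,
# which is what happened for s390x here.
print("s390x" in archs)
```

In practice, `skopeo inspect --raw docker://<image>` returns this JSON index, so the same membership check tells you which architectures a tag publishes.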

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (3)
runtimes/datascience/ubi9-python-3.12/build-args/cpu.conf (1)

1-1: Same :latest tag issue applies here.

This file has the same critical blocker as flagged in codeserver/ubi9-python-3.12/build-args/cpu.conf: the :latest tag doesn't exist during PR testing (only :on-pr-{{revision}} tags are created). Builds will fail when attempting to pull this base image during PR validation.

Ensure the same resolution strategy applies to all affected config files.

codeserver/ubi9-python-3.12/build-args/cpu.conf (1)

1-1: Critical :latest tag issue still present.

The previously flagged critical blocker remains unresolved: this :latest tag doesn't exist during PR testing and will cause build failures. Confirm the resolution strategy with maintainers before merge.

jupyter/datascience/ubi9-python-3.12/build-args/cpu.conf (1)

1-1: Same :latest tag issue applies here.

This file has the same critical blocker as flagged in codeserver/ubi9-python-3.12/build-args/cpu.conf: the :latest tag doesn't exist during PR testing (only :on-pr-{{revision}} tags are created). Builds will fail when attempting to pull this base image during PR validation.

Ensure the same resolution strategy applies to all affected config files.

🧹 Nitpick comments (1)
tests/containers/base_image_test.py (1)

246-250: Clarify the comment wording and consider removing unused UID data.

The comment on line 246 is confusing. It states "it's the gid that is nonrandom on openshift," but if gid is predictable (nonrandom), checking it makes sense. I believe you mean "uid is random/assigned by OpenShift, but gid is predictable," which is why you're removing UID from the check.

Additionally, expected_uid (line 233) is still defined and added to the data structure (lines 237, 241) but is no longer used in assertions. Consider either removing it from the data structure or adding a comment explaining why it's retained.

Apply this diff to clarify the comment:

-                    # ignore `:%u`, it does not matter what the uid is, it's the gid that is nonrandom on openshift
+                    # ignore `:%u` - UID is assigned by OpenShift and varies; only GID should be validated
                     _, output = container.exec(["stat", "--format='%a:%g'", f"{item[0]}"])

Also, please verify that skipping UID validation is appropriate for c9s base images in OpenShift environments. If UID is truly unpredictable in the new image context, consider documenting this behavior change or removing expected_uid from the data structure entirely.
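The check the review is discussing can be sketched as follows. This is an illustrative stand-in for the test's logic, assuming `stat --format='%a:%g'` output; `fake_stat_output` and the helper name are hypothetical, not from the test file:

```python
# Sketch of the assertion style discussed above: compare only permissions and
# GID from `stat --format='%a:%g'` output, ignoring the (random on OpenShift) UID.

def check_perms_and_gid(stat_output: str, expected_perms: str, expected_gid: str) -> bool:
    # stat was invoked with single quotes in the format string, so strip them first
    cleaned = stat_output.strip().strip("'")
    perms, gid = cleaned.split(":")
    return perms == expected_perms and gid == expected_gid

fake_stat_output = "'775:0'\n"  # e.g. /opt/app-root/lib inside the container
print(check_perms_and_gid(fake_stat_output, "775", "0"))
```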

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bd0d307 and c65cb13.

📒 Files selected for processing (35)
  • codeserver/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • codeserver/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • jupyter/datascience/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm (0 hunks)
  • jupyter/minimal/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • jupyter/minimal/ubi9-python-3.12/build-args/cuda.conf (1 hunks)
  • jupyter/minimal/ubi9-python-3.12/build-args/rocm.conf (1 hunks)
  • jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • jupyter/pytorch+llmcompressor/ubi9-python-3.12/build-args/cuda.conf (1 hunks)
  • jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • jupyter/pytorch/ubi9-python-3.12/build-args/cuda.conf (1 hunks)
  • jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm (0 hunks)
  • jupyter/rocm/pytorch/ubi9-python-3.12/build-args/rocm.conf (1 hunks)
  • jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm (0 hunks)
  • jupyter/rocm/tensorflow/ubi9-python-3.12/build-args/rocm.conf (1 hunks)
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • jupyter/tensorflow/ubi9-python-3.12/build-args/cuda.conf (1 hunks)
  • jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • jupyter/trustyai/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • rstudio/rhel9-python-3.12/Dockerfile.cpu (0 hunks)
  • rstudio/rhel9-python-3.12/Dockerfile.cuda (0 hunks)
  • runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • runtimes/datascience/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • runtimes/minimal/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm (0 hunks)
  • runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm (0 hunks)
  • runtimes/rocm-tensorflow/ubi9-python-3.12/build-args/rocm.conf (1 hunks)
  • runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • tests/containers/base_image_test.py (2 hunks)
💤 Files with no reviewable changes (20)
  • jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
  • runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda
  • runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu
  • runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
  • runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
  • jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm
  • runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm
  • jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
  • runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
  • rstudio/rhel9-python-3.12/Dockerfile.cpu
  • runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
  • rstudio/rhel9-python-3.12/Dockerfile.cuda
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm
  • codeserver/ubi9-python-3.12/Dockerfile.cpu
🚧 Files skipped from review as they are similar to previous changes (11)
  • jupyter/rocm/tensorflow/ubi9-python-3.12/build-args/rocm.conf
  • jupyter/tensorflow/ubi9-python-3.12/build-args/cuda.conf
  • runtimes/minimal/ubi9-python-3.12/build-args/cpu.conf
  • jupyter/minimal/ubi9-python-3.12/build-args/cuda.conf
  • jupyter/minimal/ubi9-python-3.12/build-args/cpu.conf
  • runtimes/rocm-tensorflow/ubi9-python-3.12/build-args/rocm.conf
  • jupyter/rocm/pytorch/ubi9-python-3.12/build-args/rocm.conf
  • jupyter/minimal/ubi9-python-3.12/build-args/rocm.conf
  • jupyter/pytorch/ubi9-python-3.12/build-args/cuda.conf
  • jupyter/trustyai/ubi9-python-3.12/build-args/cpu.conf
  • jupyter/pytorch+llmcompressor/ubi9-python-3.12/build-args/cuda.conf
🧰 Additional context used
🧬 Code graph analysis (1)
tests/containers/base_image_test.py (1)
tests/containers/kubernetes_utils.py (1)
  • exec (282-320)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (46)
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-minimal-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-codeserver-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-rhel9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-rhel9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
  • GitHub Check: build (cuda-rstudio-rhel9-python-3.12, 3.12, linux/amd64, true) / build
  • GitHub Check: build (rstudio-rhel9-python-3.12, 3.12, linux/amd64, true) / build
  • GitHub Check: build (rocm-runtime-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-trustyai-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-runtime-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (rocm-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (runtime-cuda-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
  • GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-cuda-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
  • GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
🔇 Additional comments (1)
tests/containers/base_image_test.py (1)

236-236: LGTM! Type annotation now correctly reflects the data structure.

The change from list[str] to list[list[str]] fixes an incorrect type annotation, as the data structure has always been a 2D list containing [path, perms, gid, uid] entries.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c65cb13 and 83a1695.

📒 Files selected for processing (7)
  • codeserver/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • jupyter/datascience/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • jupyter/minimal/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • jupyter/trustyai/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • rstudio/rhel9-python-3.12/build-args/cpu.conf (1 hunks)
  • runtimes/datascience/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • runtimes/minimal/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • rstudio/rhel9-python-3.12/build-args/cpu.conf
🚧 Files skipped from review as they are similar to previous changes (5)
  • jupyter/minimal/ubi9-python-3.12/build-args/cpu.conf
  • jupyter/trustyai/ubi9-python-3.12/build-args/cpu.conf
  • runtimes/minimal/ubi9-python-3.12/build-args/cpu.conf
  • jupyter/datascience/ubi9-python-3.12/build-args/cpu.conf
  • codeserver/ubi9-python-3.12/build-args/cpu.conf
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (35)
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-minimal-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-rhel9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-rhel9-on-pull-request
  • GitHub Check: build (cuda-rstudio-rhel9-python-3.12, 3.12, linux/amd64, true) / build
  • GitHub Check: build (rstudio-rhel9-python-3.12, 3.12, linux/amd64, true) / build
  • GitHub Check: build (cuda-jupyter-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-cuda-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-runtime-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
  • GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (rocm-runtime-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (jupyter-trustyai-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-cuda-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
🔇 Additional comments (1)
runtimes/datascience/ubi9-python-3.12/build-args/cpu.conf (1)

1-1: Verify c9s base image architecture support via the registry or the PR discussion.

CentOS Stream 9 (c9s) container images natively support x86_64, aarch64, ppc64le, and s390x architectures. However, the specific architecture support of the PR-built odh-base-image-cpu-py312-c9s image cannot be confirmed in this environment (skopeo and curl registry inspection are unavailable).

Since the PR comments mention an s390x compatibility error, verify that the image build pipeline includes all required architectures (s390x, ppc64le, amd64, arm64) by checking:

  • The image build configuration or Dockerfile
  • The PR comment thread for architecture-related failures
  • The Quay.io registry manifest for this specific tag to confirm which architectures are actually published

@@ -1 +1 @@
-BASE_IMAGE=registry.access.redhat.com/ubi9/python-312:latest
+BASE_IMAGE=quay.io/opendatahub/odh-base-image-cpu-py312-c9s:on-pr-b725e0c555d008b873acd02c53ff23492606617b

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

Replace PR-tagged image with a stable tag before merging.

The image reference uses a temporary PR build tag (on-pr-b725e0c555d008b873acd02c53ff23492606617b). PR-tagged images are typically ephemeral and will break builds once the artifacts are cleaned up or expire.

Since the PR description indicates this depends on PR #2688, ensure that PR is merged and a stable release tag (e.g., latest, versioned tag, or commit SHA from main) is used instead.

Verify the dependency PR status and plan for updating to a stable image tag:


🏁 Script executed:

#!/bin/bash
# Check if the dependency PR #2688 is merged
gh pr view 2688 --repo opendatahub-io/notebooks --json state,mergedAt,title

# Check if the image with PR tag exists and is accessible
skopeo inspect docker://quay.io/opendatahub/odh-base-image-cpu-py312-c9s:on-pr-b725e0c555d008b873acd02c53ff23492606617b 2>&1 | head -20

Length of output: 228


Let me check the codebase to understand the image tagging strategy and see if there are other stable tags being used elsewhere:


🏁 Script executed:

#!/bin/bash
# Search for other image references in build-args to understand tagging pattern
find runtimes -name "*.conf" -type f -exec grep -H "quay.io\|image" {} \; | head -30

# Look for any documentation or README about image versions
find runtimes -name "README*" -o -name "*.md" | xargs grep -l "image\|tag\|version" 2>/dev/null | head -10

Length of output: 945


Replace PR-tagged image with a stable version tag to match repository pattern.

The image uses a temporary PR build tag (on-pr-b725e0c555d008b873acd02c53ff23492606617b), which is inconsistent with other base images in the repository that use stable versioned tags (e.g., v12.6, v6.2, v6.3). PR-tagged images are ephemeral and break builds once artifacts expire.

Since dependency PR #2688 is now merged (2025-11-19), a stable version tag should be available for odh-base-image-cpu-py312-c9s. Update both affected files to use a versioned tag before merging:

  • runtimes/datascience/ubi9-python-3.12/build-args/cpu.conf
  • runtimes/minimal/ubi9-python-3.12/build-args/cpu.conf
🤖 Prompt for AI Agents
runtimes/datascience/ubi9-python-3.12/build-args/cpu.conf around line 1: the
file references a PR-specific image tag which is ephemeral; replace the
PR-tagged image name with the repository's stable version tag for
odh-base-image-cpu-py312-c9s (the same versioning pattern used elsewhere, e.g.,
v12.6 or the merged dependency's released tag) and make the identical change in
runtimes/minimal/ubi9-python-3.12/build-args/cpu.conf so both files reference
the stable versioned image tag.
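One way to sidestep both the ephemeral on-pr-… tag and a mutable :latest is to pin the base image by digest. A hypothetical sketch of rewriting the build-args line (the digest below is a placeholder, not the real image's digest):

```python
# Rewrite a build-args line from a tag reference to a digest-pinned reference.
# The digest value is a placeholder used purely for illustration.

def pin_by_digest(conf_line: str, digest: str) -> str:
    key, image_ref = conf_line.split("=", 1)
    repo = image_ref.rsplit(":", 1)[0]  # drop the tag
    return f"{key}={repo}@{digest}"

line = "BASE_IMAGE=quay.io/opendatahub/odh-base-image-cpu-py312-c9s:on-pr-b725e0c555d008b873acd02c53ff23492606617b"
print(pin_by_digest(line, "sha256:0000000000000000000000000000000000000000000000000000000000000000"))
```

The real digest would come from `skopeo inspect --format '{{.Digest}}'` against the published image once the dependency PR's stable tag exists.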

@openshift-ci openshift-ci bot added size/l and removed size/l labels Nov 19, 2025
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 83a1695 and 739ead2.

📒 Files selected for processing (7)
  • codeserver/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • jupyter/datascience/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • jupyter/minimal/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • jupyter/trustyai/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • rstudio/rhel9-python-3.12/build-args/cpu.conf (1 hunks)
  • runtimes/datascience/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • runtimes/minimal/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • rstudio/rhel9-python-3.12/build-args/cpu.conf
🚧 Files skipped from review as they are similar to previous changes (3)
  • runtimes/datascience/ubi9-python-3.12/build-args/cpu.conf
  • jupyter/datascience/ubi9-python-3.12/build-args/cpu.conf
  • jupyter/minimal/ubi9-python-3.12/build-args/cpu.conf
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (49)
  • GitHub Check: build (cuda-rstudio-rhel9-python-3.12, 3.12, linux/amd64, true) / build
  • GitHub Check: build (rstudio-rhel9-python-3.12, 3.12, linux/amd64, true) / build
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-codeserver-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-minimal-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-rhel9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-rhel9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
  • GitHub Check: build (rocm-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-runtime-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-cuda-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
  • GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-cuda-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
  • GitHub Check: build (cuda-jupyter-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (cuda-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
  • GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-runtime-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-trustyai-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
  • GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/s390x, false) / build
  • GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
🔇 Additional comments (1)
jupyter/trustyai/ubi9-python-3.12/build-args/cpu.conf (1)

1-1: Verify that the PR-specific image tag will be replaced with a stable release tag before merge.

The BASE_IMAGE references a PR-specific tag (on-pr-b725e0c555d008b873acd02c53ff23492606617b) that is used consistently across 7 image configuration files. PR-specific tags are ephemeral and typically become unavailable after the PR is merged or closed, which could cause build failures.

Other base image references in the codebase consistently use stable release tags (e.g., v12.6, v12.8, v6.3), establishing a clear pattern that should be followed here as well.

Additionally, per the architecture comments in rstudio configuration files (documenting linux/arm64, linux/x86_64 support), confirm that this base image supports all required architectures for the project.

Files requiring updates:

  • jupyter/trustyai/ubi9-python-3.12/build-args/cpu.conf
  • codeserver/ubi9-python-3.12/build-args/cpu.conf
  • jupyter/datascience/ubi9-python-3.12/build-args/cpu.conf
  • jupyter/minimal/ubi9-python-3.12/build-args/cpu.conf
  • rstudio/rhel9-python-3.12/build-args/cpu.conf
  • runtimes/datascience/ubi9-python-3.12/build-args/cpu.conf
  • runtimes/minimal/ubi9-python-3.12/build-args/cpu.conf

@openshift-ci openshift-ci bot added size/l and removed size/l labels Nov 19, 2025
…images instead of ubi9 (#3020)

This is to align ODH with AIPCC bases which use RHEL. Therefore, ODH should use CentOS Stream.
… (#3020)

```
+ dnf install -y perl mesa-libGL skopeo gcc gcc-c++ make openssl-devel autoconf automake libtool cmake python3-devel pybind11-devel openblas-devel unixODBC-devel
  Updating Subscription Management repositories.
  Red Hat Universal Base Image 9 (RPMs) - BaseOS  1.4 MB/s | 433 kB     00:00
  Red Hat Universal Base Image 9 (RPMs) - AppStre 6.4 MB/s | 2.2 MB     00:00
  Red Hat Universal Base Image 9 (RPMs) - CodeRea 683 kB/s | 174 kB     00:00
  Package mesa-libGL-24.2.8-3.el9_6.s390x is already installed.
  Package skopeo-2:1.20.0-1.el9.s390x is already installed.
  Package gcc-11.5.0-11.el9.s390x is already installed.
  Error:
     Problem: package mesa-dri-drivers-25.0.7-3.el9_7.s390x from ubi-9-appstream-rpms requires libLLVM.so.20.1()(64bit), but none of the providers can be installed
      - package mesa-dri-drivers-25.0.7-3.el9_7.s390x from ubi-9-appstream-rpms requires libLLVM.so.20.1(LLVM_20.1)(64bit), but none of the providers can be installed
      - cannot install both llvm-libs-20.1.8-3.el9.s390x from ubi-9-appstream-rpms and llvm-libs-19.1.7-2.el9.s390x from @System
      - package mesa-libGL-25.0.7-3.el9_7.s390x from ubi-9-appstream-rpms requires libgallium-25.0.7.so()(64bit), but none of the providers can be installed
...
```
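The failure above is a dependency-solver conflict: the s390x base image ships llvm-libs 19, while the UBI 9 AppStream repository now carries mesa packages linked against libLLVM 20. One hedged way to sketch a fix (assuming it is acceptable to let dnf replace the preinstalled llvm-libs) is to pass `--best --allowerasing` so the solver may erase the conflicting package instead of failing:

```dockerfile
# Sketch only: allow dnf to swap llvm-libs 19 (from the base image) for
# llvm-libs 20 required by the newer mesa builds, rather than erroring out.
RUN dnf install -y --best --allowerasing \
        perl mesa-libGL skopeo gcc gcc-c++ make openssl-devel \
        autoconf automake libtool cmake python3-devel pybind11-devel \
        openblas-devel unixODBC-devel \
    && dnf clean all
```

Whether erasing the older llvm-libs is safe depends on what else in the image links against it; pinning the base image to one that already carries LLVM 20 would avoid the conflict entirely.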

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
runtimes/minimal/ubi9-python-3.12/build-args/cpu.conf (1)

1-1: LGTM! Past concerns about PR-specific tags addressed.

The base image reference now uses a stable latest tag instead of the PR-specific tag mentioned in previous reviews. The change is consistent with the migration to c9s base images.

Note: Consider using a versioned tag for reproducibility (same recommendation as in jupyter/minimal/ubi9-python-3.12/build-args/cpu.conf).

runtimes/datascience/ubi9-python-3.12/build-args/cpu.conf (1)

1-1: LGTM! Past concerns about PR-specific tags addressed.

The base image reference now uses a stable latest tag instead of the PR-specific tag mentioned in previous reviews. The change is consistent with the migration to c9s base images.

Note: Consider using a versioned tag for reproducibility (same recommendation as in other cpu.conf files).

🧹 Nitpick comments (2)
jupyter/minimal/ubi9-python-3.12/build-args/cpu.conf (1)

1-2: Consider using a versioned tag instead of latest for reproducibility.

The change correctly migrates to the c9s base image. However, using the latest tag can lead to non-reproducible builds. Other base images in this PR use versioned tags (e.g., v12.8).

If a stable version tag is available, consider updating to use it:

```diff
 # Base Image   : c9s with Python 3.12
-BASE_IMAGE=quay.io/opendatahub/odh-base-image-cpu-py312-c9s:latest
+BASE_IMAGE=quay.io/opendatahub/odh-base-image-cpu-py312-c9s:v1.0
```
tests/containers/base_image_test.py (1)

246-250: Clarify the comment explaining why UID is ignored.

The functional change to ignore UID is correct for OpenShift environments where containers run with random UIDs. However, the comment explanation could be clearer.

The comment states "it's the gid that is nonrandom on openshift", but this phrasing is potentially confusing. On OpenShift:

  • UIDs are assigned randomly by the platform (which is why we ignore them)
  • GIDs are typically fixed to 0 (root group) to allow file access

Consider rewording for clarity:

```diff
-                    # ignore `:%u`, it does not matter what the uid is, it's the gid that is nonrandom on openshift
+                    # ignore UID - OpenShift assigns random UIDs at runtime, but GID is fixed (typically 0)
                     _, output = container.exec(["stat", "--format='%a:%g'", f"{item[0]}"])
```
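The parsing side of that assertion can be sketched as a small helper (the function name is hypothetical, not part of the test suite) that reads the `stat --format='%a:%g'` output and deliberately checks only the mode and group:

```python
def check_perms(stat_output: str, expected_mode: str, expected_gid: int = 0) -> bool:
    """Parse `stat --format='%a:%g'` output and verify mode and group.

    UID is deliberately not checked: OpenShift runs containers under a
    random UID, while the GID stays fixed (typically 0, the root group).
    """
    # stat wraps its output in single quotes per the format string above
    mode, gid = stat_output.strip().strip("'").split(":")
    return mode == expected_mode and int(gid) == expected_gid

# Example: a path owned by the root group with mode 775 passes,
# regardless of which random UID the platform assigned.
print(check_perms("'775:0'", "775"))  # True
print(check_perms("'775:1001'", "775"))  # False
```

This mirrors why the suggested comment wording matters: the check is group-based by design, not an oversight.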
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 739ead2 and 71d0525.

📒 Files selected for processing (35)
  • codeserver/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • codeserver/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • jupyter/datascience/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm (0 hunks)
  • jupyter/minimal/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • jupyter/minimal/ubi9-python-3.12/build-args/cuda.conf (1 hunks)
  • jupyter/minimal/ubi9-python-3.12/build-args/rocm.conf (1 hunks)
  • jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • jupyter/pytorch+llmcompressor/ubi9-python-3.12/build-args/cuda.conf (1 hunks)
  • jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • jupyter/pytorch/ubi9-python-3.12/build-args/cuda.conf (1 hunks)
  • jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm (0 hunks)
  • jupyter/rocm/pytorch/ubi9-python-3.12/build-args/rocm.conf (1 hunks)
  • jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm (0 hunks)
  • jupyter/rocm/tensorflow/ubi9-python-3.12/build-args/rocm.conf (1 hunks)
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • jupyter/tensorflow/ubi9-python-3.12/build-args/cuda.conf (1 hunks)
  • jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • jupyter/trustyai/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • rstudio/rhel9-python-3.12/Dockerfile.cpu (0 hunks)
  • rstudio/rhel9-python-3.12/Dockerfile.cuda (0 hunks)
  • runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • runtimes/datascience/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu (0 hunks)
  • runtimes/minimal/ubi9-python-3.12/build-args/cpu.conf (1 hunks)
  • runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm (0 hunks)
  • runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm (0 hunks)
  • runtimes/rocm-tensorflow/ubi9-python-3.12/build-args/rocm.conf (1 hunks)
  • runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda (0 hunks)
  • tests/containers/base_image_test.py (2 hunks)
💤 Files with no reviewable changes (20)
  • runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu
  • runtimes/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
  • codeserver/ubi9-python-3.12/Dockerfile.cpu
  • runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
  • runtimes/rocm-pytorch/ubi9-python-3.12/Dockerfile.rocm
  • runtimes/tensorflow/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.rocm
  • jupyter/pytorch+llmcompressor/ubi9-python-3.12/Dockerfile.cuda
  • rstudio/rhel9-python-3.12/Dockerfile.cuda
  • jupyter/rocm/pytorch/ubi9-python-3.12/Dockerfile.rocm
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
  • rstudio/rhel9-python-3.12/Dockerfile.cpu
  • runtimes/rocm-tensorflow/ubi9-python-3.12/Dockerfile.rocm
  • jupyter/pytorch/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/rocm/tensorflow/ubi9-python-3.12/Dockerfile.rocm
  • runtimes/pytorch/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.cuda
🚧 Files skipped from review as they are similar to previous changes (9)
  • jupyter/rocm/tensorflow/ubi9-python-3.12/build-args/rocm.conf
  • runtimes/rocm-tensorflow/ubi9-python-3.12/build-args/rocm.conf
  • jupyter/minimal/ubi9-python-3.12/build-args/cuda.conf
  • jupyter/trustyai/ubi9-python-3.12/build-args/cpu.conf
  • jupyter/rocm/pytorch/ubi9-python-3.12/build-args/rocm.conf
  • jupyter/minimal/ubi9-python-3.12/build-args/rocm.conf
  • jupyter/datascience/ubi9-python-3.12/build-args/cpu.conf
  • codeserver/ubi9-python-3.12/build-args/cpu.conf
  • jupyter/pytorch/ubi9-python-3.12/build-args/cuda.conf
🧰 Additional context used
🧬 Code graph analysis (1)
tests/containers/base_image_test.py (1)
tests/containers/kubernetes_utils.py (1)
  • exec (282-320)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (22)
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-minimal-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-tensorflow-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-codeserver-datascience-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-tensorflow-rocm-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-pipeline-runtime-pytorch-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-minimal-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-pytorch-llmcompressor-cuda-py312-ubi9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cpu-py312-rhel9-on-pull-request
  • GitHub Check: Red Hat Konflux / odh-workbench-rstudio-minimal-cuda-py312-rhel9-on-pull-request
  • GitHub Check: build (cuda-rstudio-rhel9-python-3.12, 3.12, linux/amd64, true) / build
  • GitHub Check: build (rstudio-rhel9-python-3.12, 3.12, linux/amd64, true) / build
🔇 Additional comments (3)
jupyter/pytorch+llmcompressor/ubi9-python-3.12/build-args/cuda.conf (1)

1-1: LGTM! Base image migration to c9s.

The change is consistent with the PR objective to migrate from UBI9 to c9s base images. Image verification is requested in the previous file review.

tests/containers/base_image_test.py (1)

236-236: LGTM! Type annotation correctly reflects the data structure.

The updated type annotation list[list[str]] correctly represents the nested list structure used for directories_to_check.

jupyter/tensorflow/ubi9-python-3.12/build-args/cuda.conf (1)

1-1: Images cannot be verified accessible; clarify availability and architecture support expectations.

The migration to c9s base images cannot be verified at this time because the referenced quay.io/opendatahub images are not accessible. However, CentOS Stream 9 is compatible with 64-bit x86, 64-bit ARM, IBM Z (s390x Z14+), and IBM POWER architectures, so s390x support is not inherently limited by c9s.

Before merging, confirm:

  • Whether these ODH base images are intended to be private/internal or public
  • Whether images are pre-release or already published
  • If published, verify they exist and their supported architectures via quay.io repository inspection
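One way to do that last check without pulling the image is `skopeo inspect --raw docker://quay.io/opendatahub/odh-base-image-cpu-py312-c9s:latest`, which returns the tag's manifest index. The sketch below parses such an index to list the published architectures; the JSON sample and digests are fabricated for illustration:

```python
import json

# Illustrative manifest index, shaped like what `skopeo inspect --raw`
# returns for a multi-arch tag (digests here are made up).
manifest_index = json.loads("""
{
  "schemaVersion": 2,
  "manifests": [
    {"digest": "sha256:aaa", "platform": {"architecture": "amd64", "os": "linux"}},
    {"digest": "sha256:bbb", "platform": {"architecture": "arm64", "os": "linux"}}
  ]
}
""")

# Collect the architectures the tag actually publishes
archs = sorted(m["platform"]["architecture"] for m in manifest_index["manifests"])
print(archs)  # ['amd64', 'arm64']

# Fail fast if an architecture this repo builds for is missing
required = {"amd64", "arm64", "s390x", "ppc64le"}
missing = required - set(archs)
print(sorted(missing))  # ['ppc64le', 's390x']
```

Running this against the real index would confirm (or refute) s390x and ppc64le coverage before the conf files start referencing the tag.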

@openshift-ci
Contributor

openshift-ci bot commented Nov 19, 2025

@jiridanek: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/runtime-cuda-pt-ubi9-python-3-12-pr-image-mirror | 71d0525 | link | true | /test runtime-cuda-pt-ubi9-python-3-12-pr-image-mirror |
| ci/prow/runtime-ubi9-python-3-12-pr-image-mirror | 71d0525 | link | true | /test runtime-ubi9-python-3-12-pr-image-mirror |
| ci/prow/notebook-cuda-jupyter-pt-ubi9-python-3-12-pr-image-mirror | 71d0525 | link | true | /test notebook-cuda-jupyter-pt-ubi9-python-3-12-pr-image-mirror |
| ci/prow/notebook-jupyter-tai-ubi9-python-3-12-pr-image-mirror | 71d0525 | link | true | /test notebook-jupyter-tai-ubi9-python-3-12-pr-image-mirror |
| ci/prow/runtime-cuda-tf-ubi9-python-3-12-pr-image-mirror | 71d0525 | link | true | /test runtime-cuda-tf-ubi9-python-3-12-pr-image-mirror |
| ci/prow/notebook-rocm-jupyter-ubi9-python-3-12-pr-image-mirror | 71d0525 | link | true | /test notebook-rocm-jupyter-ubi9-python-3-12-pr-image-mirror |
| ci/prow/rocm-runtime-pt-ubi9-python-3-12-pr-image-mirror | 71d0525 | link | true | /test rocm-runtime-pt-ubi9-python-3-12-pr-image-mirror |
| ci/prow/notebook-rocm-jupyter-pt-ubi9-python-3-12-pr-image-mirror | 71d0525 | link | true | /test notebook-rocm-jupyter-pt-ubi9-python-3-12-pr-image-mirror |
| ci/prow/codeserver-ubi9-python-3-12-pr-image-mirror | 71d0525 | link | true | /test codeserver-ubi9-python-3-12-pr-image-mirror |
| ci/prow/notebook-jupyter-ubi9-python-3-12-pr-image-mirror | 71d0525 | link | true | /test notebook-jupyter-ubi9-python-3-12-pr-image-mirror |
| ci/prow/notebook-cuda-jupyter-ubi9-python-3-12-pr-image-mirror | 71d0525 | link | true | /test notebook-cuda-jupyter-ubi9-python-3-12-pr-image-mirror |
| ci/prow/runtime-ds-ubi9-python-3-12-pr-image-mirror | 71d0525 | link | true | /test runtime-ds-ubi9-python-3-12-pr-image-mirror |
| ci/prow/notebook-cuda-jupyter-tf-ubi9-python-3-12-pr-image-mirror | 71d0525 | link | true | /test notebook-cuda-jupyter-tf-ubi9-python-3-12-pr-image-mirror |
| ci/prow/images | 71d0525 | link | true | /test images |
| ci/prow/rocm-notebooks-py312-e2e-tests | 71d0525 | link | true | /test rocm-notebooks-py312-e2e-tests |
| ci/prow/notebooks-py312-ubi9-e2e-tests | 71d0525 | link | true | /test notebooks-py312-ubi9-e2e-tests |
| ci/prow/notebook-jupyter-ds-ubi9-python-3-12-pr-image-mirror | 71d0525 | link | true | /test notebook-jupyter-ds-ubi9-python-3-12-pr-image-mirror |

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@openshift-ci
Contributor

openshift-ci bot commented Nov 20, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ide-developer
Once this PR has been reviewed and has the lgtm label, please assign andyatmiami for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@jiridanek jiridanek merged commit 3d73bd7 into opendatahub-io:main Nov 20, 2025
42 of 78 checks passed

Labels

lgtm review-requested GitHub Bot creates notification on #pr-review-ai-ide-team slack channel size/l tide/merge-method-rebase Denotes a PR that should be rebased by tide when it merges.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants