Commit 9b67aed

Sharon Tan authored and facebook-github-bot committed
Address likely package mismatch issues (#1634)
Summary: Pull Request resolved: #1634

Previous CI runs were encountering two issues:

1. https://github.com/.../runs/16867033035/job/47776474001

   ```
   E torch.AcceleratorError: CUDA error: no kernel image is available for execution on the device
   E CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
   E For debugging consider passing CUDA_LAUNCH_BLOCKING=1
   E Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
   ```

   - likely because CUDA 12.1 is no longer supported by the newest PyTorch versions being installed
   - note: 12.9 also failed, with `nvidia-container-cli: requirement error: unsatisfied condition: cuda>=12.9, please update your driver to a newer version, or use an earlier cuda container: unknown.`

2. https://github.com/pytorch/captum/actions/runs/16867033037/job/47775775171

   `tests/attr/test_llm_attr_hf_compatibility.py:76: error: Argument 1 to "__call__" of "_Wrapped" has incompatible type "str"; expected "PreTrainedModel" [arg-type]`

These are likely due to (1) PyTorch versions mismatched against the CUDA version 12.1 specified in test-pip-gpu.yml, triggered by a [new PyTorch release](https://github.com/pytorch/pytorch/releases), and (2) incomplete typing causing an error from `mypy`, triggered by the loosely defined `mypy>=0.760` requirement probably pulling in a [new MyPy release](https://pypi.org/project/mypy/#history).

Reviewed By: styusuf

Differential Revision: D80120024

fbshipit-source-id: d8837826678e143849546341db784410a67907cc
1 parent 784fc48 commit 9b67aed

File tree

2 files changed: +5 -3 lines changed


.github/workflows/test-pip-gpu.yml

Lines changed: 4 additions & 2 deletions

```diff
@@ -12,7 +12,7 @@ jobs:
   tests:
     strategy:
       matrix:
-        cuda_arch_version: ["12.1"]
+        cuda_arch_version: ["12.6"]
       fail-fast: false
     uses: pytorch/test-infra/.github/workflows/linux_job_v2.yml@main
     with:
@@ -22,7 +22,9 @@ jobs:
       gpu-arch-version: ${{ matrix.cuda_arch_version }}
       script: |
         python3 -m pip install --upgrade pip --progress-bar off
-        python3 -m pip install -e .[dev] --progress-bar off
+        python3 -m pip install torch torchvision --index-url https://download.pytorch.org/whl/cu126 --progress-bar off
+        python3 -m pip install -e .[dev] --extra-index-url https://download.pytorch.org/whl/cu126 --progress-bar off
+        python3 -m pip install transformers --progress-bar off

         # Build package
         python3 -m pip install build --progress-bar off
```
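The pinned wheel index in the workflow must stay in sync with the matrix's `cuda_arch_version`: CUDA 12.6 corresponds to the `cu126` tag in `https://download.pytorch.org/whl/cu126`. As a hedged illustration (the helper name is ours, not part of the commit or the repo), that correspondence can be sketched as:

```python
def cuda_wheel_tag(cuda_version: str) -> str:
    """Map a CUDA version string (e.g. "12.6") to the tag used in
    PyTorch wheel-index URLs (e.g. "cu126")."""
    major, minor = cuda_version.split(".")[:2]
    return f"cu{major}{minor}"


# The matrix value and the pip index URL must agree:
assert cuda_wheel_tag("12.6") == "cu126"
# The old, no-longer-supported pairing that caused the kernel-image error:
assert cuda_wheel_tag("12.1") == "cu121"
```

A check like this could run at the top of the CI script to fail fast on a mismatch instead of surfacing later as an opaque `no kernel image is available` CUDA error.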

tests/attr/test_llm_attr_hf_compatibility.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -73,7 +73,7 @@ def test_llm_attr_hf_compatibility(
             "hf-internal-testing/tiny-random-LlamaForCausalLM"
         )

-        llm.to(self.device)
+        llm.to(self.device)  # type: ignore[arg-type]
         llm.eval()
         llm_attr = LLMAttribution(AttrClass(llm), tokenizer)
```
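For failure (2), the fix scopes the suppression to a single line with an explicit error code (`# type: ignore[arg-type]`) rather than a blanket ignore, so mypy still checks everything else at that call site. A complementary guard against the same class of breakage, sketched here as an assumption and not part of this commit, would be bounding the loose dev requirement so a new MyPy release cannot change CI behavior silently:

```
# dev requirements fragment (hypothetical; the upper bound is an assumption)
mypy>=0.760,<2.0
```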

0 commit comments