This repository was archived by the owner on Oct 25, 2024. It is now read-only.

Commit c9ed167

[pre-commit.ci] pre-commit autoupdate (#1646)
Signed-off-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
1 parent 86087dc commit c9ed167

File tree

21 files changed: +73 −72 lines

.github/checkgroup.yml

Lines changed: 49 additions & 49 deletions

@@ -65,54 +65,54 @@ subprojects:
       - "engine-unit-test-PR-test"
       - "Genreate-Engine-Report"
 
-  - id: "Windows Binary Test"
-    paths:
-      - ".github/workflows/windows-test.yml"
-      - "requirements.txt"
-      - "setup.py"
-      - "intel_extension_for_transformers/transformers/runtime/**"
-      - "intel_extension_for_transformers/transformers/llm/operator/**"
-      - "!intel_extension_for_transformers/transformers/runtime/third_party/**"
-      - "!intel_extension_for_transformers/transformers/runtime/docs/**"
-      - "!intel_extension_for_transformers/transformers/runtime/test/**"
-    checks:
-      - "Windows-Binary-Test"
+  # - id: "Windows Binary Test"
+  #   paths:
+  #     - ".github/workflows/windows-test.yml"
+  #     - "requirements.txt"
+  #     - "setup.py"
+  #     - "intel_extension_for_transformers/transformers/runtime/**"
+  #     - "intel_extension_for_transformers/transformers/llm/operator/**"
+  #     - "!intel_extension_for_transformers/transformers/runtime/third_party/**"
+  #     - "!intel_extension_for_transformers/transformers/runtime/docs/**"
+  #     - "!intel_extension_for_transformers/transformers/runtime/test/**"
+  #   checks:
+  #     - "Windows-Binary-Test"
 
-  - id: "LLM Model Test workflow"
-    paths:
-      - ".github/workflows/llm-test.yml"
-      - ".github/workflows/script/models/run_llm.sh"
-      - "intel_extension_for_transformers/transformers/runtime/**"
-      - "!intel_extension_for_transformers/transformers/runtime/kernels/**"
-      - "!intel_extension_for_transformers/transformers/runtime/test/**"
-      - "!intel_extension_for_transformers/transformers/runtime/third_party/**"
-      - "!intel_extension_for_transformers/transformers/runtime/docs/**"
-    checks:
-      - "LLM-Workflow (gpt-j-6b, engine, latency, bf16,int8,fp8)"
-      - "Generate-LLM-Report"
+  # - id: "LLM Model Test workflow"
+  #   paths:
+  #     - ".github/workflows/llm-test.yml"
+  #     - ".github/workflows/script/models/run_llm.sh"
+  #     - "intel_extension_for_transformers/transformers/runtime/**"
+  #     - "!intel_extension_for_transformers/transformers/runtime/kernels/**"
+  #     - "!intel_extension_for_transformers/transformers/runtime/test/**"
+  #     - "!intel_extension_for_transformers/transformers/runtime/third_party/**"
+  #     - "!intel_extension_for_transformers/transformers/runtime/docs/**"
+  #   checks:
+  #     - "LLM-Workflow (gpt-j-6b, engine, latency, bf16,int8,fp8)"
+  #     - "Generate-LLM-Report"
 
-  - id: "Chat Bot Test workflow"
-    paths:
-      - ".github/workflows/chatbot-test.yml"
-      - ".github/workflows/chatbot-inference-llama-2-7b-chat-hf.yml"
-      - ".github/workflows/chatbot-inference-mpt-7b-chat.yml"
-      - ".github/workflows/chatbot-finetune-mpt-7b-chat.yml"
-      - ".github/workflows/chatbot-inference-llama-2-7b-chat-hf-hpu.yml"
-      - ".github/workflows/chatbot-inference-mpt-7b-chat-hpu.yml"
-      - ".github/workflows/chatbot-finetune-mpt-7b-chat-hpu.yml"
-      - ".github/workflows/script/chatbot/**"
-      - ".github/workflows/sample_data/**"
-      - "intel_extension_for_transformers/neural_chat/**"
-      - "intel_extension_for_transformers/transformers/llm/finetuning/**"
-      - "intel_extension_for_transformers/transformers/llm/quantization/**"
-      - "intel_extension_for_transformers/transformers/**"
-      - "workflows/chatbot/inference/**"
-      - "workflows/chatbot/fine_tuning/**"
-      - "!intel_extension_for_transformers/neural_chat/docs/**"
-      - "!intel_extension_for_transformers/neural_chat/tests/ci/**"
-      - "!intel_extension_for_transformers/neural_chat/examples/**"
-      - "!intel_extension_for_transformers/neural_chat/assets/**"
-      - "!intel_extension_for_transformers/neural_chat/README.md"
-    checks:
-      - "call-inference-llama-2-7b-chat-hf / inference test"
-      - "call-inference-mpt-7b-chat / inference test"
+  # - id: "Chat Bot Test workflow"
+  #   paths:
+  #     - ".github/workflows/chatbot-test.yml"
+  #     - ".github/workflows/chatbot-inference-llama-2-7b-chat-hf.yml"
+  #     - ".github/workflows/chatbot-inference-mpt-7b-chat.yml"
+  #     - ".github/workflows/chatbot-finetune-mpt-7b-chat.yml"
+  #     - ".github/workflows/chatbot-inference-llama-2-7b-chat-hf-hpu.yml"
+  #     - ".github/workflows/chatbot-inference-mpt-7b-chat-hpu.yml"
+  #     - ".github/workflows/chatbot-finetune-mpt-7b-chat-hpu.yml"
+  #     - ".github/workflows/script/chatbot/**"
+  #     - ".github/workflows/sample_data/**"
+  #     - "intel_extension_for_transformers/neural_chat/**"
+  #     - "intel_extension_for_transformers/transformers/llm/finetuning/**"
+  #     - "intel_extension_for_transformers/transformers/llm/quantization/**"
+  #     - "intel_extension_for_transformers/transformers/**"
+  #     - "workflows/chatbot/inference/**"
+  #     - "workflows/chatbot/fine_tuning/**"
+  #     - "!intel_extension_for_transformers/neural_chat/docs/**"
+  #     - "!intel_extension_for_transformers/neural_chat/tests/ci/**"
+  #     - "!intel_extension_for_transformers/neural_chat/examples/**"
+  #     - "!intel_extension_for_transformers/neural_chat/assets/**"
+  #     - "!intel_extension_for_transformers/neural_chat/README.md"
+  #   checks:
+  #     - "call-inference-llama-2-7b-chat-hf / inference test"
+  #     - "call-inference-mpt-7b-chat / inference test"
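For reference, each subproject entry in checkgroup.yml follows the shape below. This is a minimal sketch, not an entry from the repository: the `id`, `paths`, and `checks` keys are the ones seen in the diff above, but the example values are hypothetical.

```yaml
subprojects:
  - id: "Example Workflow"                  # hypothetical subproject name
    paths:
      - ".github/workflows/example.yml"     # changes here trigger the checks
      - "!docs/**"                          # a leading "!" excludes a path
    checks:
      - "example-check"                     # required status-check names
```

Commenting a whole entry out, as this commit does, removes its checks from the required set without losing the configuration.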

.github/workflows/script/formatScan/nlp_dict.txt

Lines changed: 1 addition & 0 deletions

@@ -1,5 +1,6 @@
 alse
 ans
+assertIn
 bu
 charactor
 daa

.pre-commit-config.yaml

Lines changed: 2 additions & 2 deletions

@@ -4,7 +4,7 @@ ci:
 
 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.5.0
+    rev: v4.6.0
     hooks:
       - id: debug-statements
       - id: mixed-line-ending
@@ -44,7 +44,7 @@ repos:
         )$
 
   - repo: https://github.com/codespell-project/codespell
-    rev: v2.2.6
+    rev: v2.3.0
     hooks:
      - id: codespell
        args: [-w, --ignore-words=.github/workflows/script/formatScan/nlp_dict.txt]
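The codespell hook configured above drives the rest of this commit: with `-w` it rewrites known misspellings in place, and `--ignore-words` names a file of tokens to leave alone (which is why `assertIn` is added to nlp_dict.txt). A rough sketch of that behavior, using typo pairs fixed by this very commit — codespell's real implementation and dictionary are far larger:

```python
# Illustrative sketch of the codespell hook's effect; not codespell's code.
# The misspelling pairs below are the ones this commit actually fixes.
MISSPELLINGS = {
    "togather": "together",
    "predicitions": "predictions",
    "socre": "score",
    "denpendency": "dependency",
    "Modificaiton": "Modification",
}

def correct(text: str, ignore_words: set[str]) -> str:
    """Replace known misspellings, skipping words in the ignore list."""
    out = []
    for word in text.split(" "):
        # A word in the ignore list is kept verbatim (like nlp_dict.txt entries).
        out.append(word if word in ignore_words else MISSPELLINGS.get(word, word))
    return " ".join(out)

print(correct("Splice them togather", set()))        # -> Splice them together
print(correct("assertIn stays put", {"assertIn"}))   # -> assertIn stays put
```

Adding a false positive like `assertIn` to the ignore-words file is cheaper than fighting the auto-fixer on every run.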

docs/code_of_conduct.md

Lines changed: 1 addition & 1 deletion

@@ -14,7 +14,7 @@ In the interest of fostering an open and welcoming environment, we as
 contributors and maintainers pledge to making participation in our project and
 our community a harassment-free experience for everyone, regardless of age, body
 size, disability, ethnicity, sex characteristics, gender identity and expression,
-level of experience, education, socio-economic status, nationality, personal
+level of experience, education, socioeconomic status, nationality, personal
 appearance, race, religion, or sexual identity and orientation.
 
 ## Our Standards

examples/huggingface/pytorch/question-answering/deployment/squad/MLperf_example/csrc/bert_qsl.cpp

Lines changed: 1 addition & 1 deletion

@@ -74,7 +74,7 @@ namespace qsl {
     }
   }
 
-  // Splice them togather
+  // Splice them together
   Queue_t result;
   for (auto& q : Buckets)
     result.splice(result.end(), std::move(q));

examples/huggingface/pytorch/question-answering/deployment/squad/MLperf_example/utils_qa.py

Lines changed: 1 addition & 1 deletion

@@ -295,7 +295,7 @@ def postprocess_qa_predictions_with_beam_search(
 
     assert len(predictions[0]) == len(
         features
-    ), f"Got {len(predictions[0])} predicitions and {len(features)} features."
+    ), f"Got {len(predictions[0])} predictions and {len(features)} features."
 
     # Build a map example to its corresponding features.
     example_id_to_index = {k: i for i, k in enumerate(examples["id"])}

examples/huggingface/pytorch/text-to-image/deployment/stable_diffusion/ITREX_StableDiffusionInstructPix2PixPipeline.py

Lines changed: 1 addition & 1 deletion

@@ -14,7 +14,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-"""Pipeline Modificaiton based from the diffusers 0.12.1 StableDiffusionInstructPix2PixPipeline"""
+"""Pipeline Modification based from the diffusers 0.12.1 StableDiffusionInstructPix2PixPipeline"""
 
 import inspect
 from typing import Callable, List, Optional, Union

examples/huggingface/pytorch/text-to-image/deployment/stable_diffusion/README.md

Lines changed: 1 addition & 1 deletion

@@ -137,7 +137,7 @@ python run_executor.py --ir_path=./qat_int8_ir --mode=latency --input_model=runw
 ## 3. Accuracy
 Frechet Inception Distance(FID) metric is used to evaluate the accuracy. This case we check the FID scores between the pytorch image and engine image.
 
-By setting --accuracy to check FID socre.
+By setting --accuracy to check FID score.
 Python API command as follows:
 ```python
 # FP32 IR

examples/huggingface/pytorch/text-to-image/deployment/stable_diffusion/diffusion_utils_img2img.py

Lines changed: 1 addition & 1 deletion

@@ -14,7 +14,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-"""Pipeline Modificaiton based from the diffusers 0.12.1 StableDiffusionImg2ImgPipeline"""
+"""Pipeline Modification based from the diffusers 0.12.1 StableDiffusionImg2ImgPipeline"""
 
 import inspect
 from typing import Callable, List, Optional, Union

intel_extension_for_transformers/neural_chat/docs/notebooks/build_chatbot_on_xpu.ipynb

Lines changed: 1 addition & 1 deletion

@@ -77,7 +77,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Install requirements that have denpendency on stock pytorch"
+    "Install requirements that have dependency on stock pytorch"
    ]
   },
   {

0 commit comments