
Commit 269d2a7

Committed by chmouel (co-authored with Claude and Gemini)
feat: Introduce AI/LLM-powered pipeline analysis
Introduced an AI/LLM-powered analysis feature to automatically diagnose failed pipeline runs. This helps developers by providing root cause analysis and suggested fixes directly in pull requests, reducing debugging time. The feature is configured in the Repository CRD under `spec.settings.ai`.

Key capabilities include:

- Support for OpenAI and Google Gemini providers, with configurable models per analysis role.
- Flexible analysis scenarios ("roles") with custom prompts and conditional triggers via CEL expressions.
- Rich analysis context, including detailed commit information, PR metadata, error messages, and container logs.
- Output of analysis results as comments on pull requests.

To support this, Git provider integrations were enhanced to fetch more comprehensive commit details. A fake LLM server (`nonoai`) was also added to enable robust and cost-effective E2E testing.

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Gemini <gemini@google.com>
Signed-off-by: Chmouel Boudjnah <chmouel@redhat.com>
Parent: 860c901
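As a sketch of how the feature described above might be enabled, here is a hypothetical Repository resource using the `spec.settings.ai` fields added by this commit. The field names come from the CRD schema in this commit; the repository URL, secret name, role name, prompt, and CEL expression are illustrative placeholders, and the `pipelinesascode.tekton.dev/v1alpha1` API version is assumed:

```yaml
apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: my-repo
spec:
  url: https://github.com/org/my-repo        # illustrative
  settings:
    ai:
      enabled: true
      provider: gemini                       # or "openai"
      secret_ref:
        name: ai-provider-token              # illustrative secret name
        key: token
      max_tokens: 1000
      timeout_seconds: 30
      roles:
        - name: failure-analysis             # unique role identifier
          on_cel: 'pipeline_run.status == "failed"'  # illustrative CEL trigger
          prompt: "Explain why this pipeline run failed and suggest a fix."
          output: pr-comment
          context_items:
            error_content: true
            commit_content: true
            pr_content: true
            container_logs:
              enabled: true
              max_lines: 50
```

Per the schema, `enabled`, `provider`, `roles`, and `secret_ref` are required, and at least one role with a `name` and `prompt` must be defined.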


46 files changed: +7293 −37 lines

Makefile

Lines changed: 1 addition & 2 deletions
@@ -19,8 +19,7 @@ HUGO_BIN := $(TMPDIR)/hugo/hugo
 PY_FILES := $(shell find . -type f -regex ".*\.py" -not -regex ".*\.venv/.*" -print)
 SH_FILES := $(shell find hack/ -type f -regex ".*\.sh" -not -regex ".*\.venv/.*" -print)
 YAML_FILES := $(shell find . -not -regex '^./vendor/.*' -type f -regex ".*y[a]ml" -print)
-MD_FILES := $(shell find . -type f -regex ".*md" -not -regex '^./vendor/.*' -not -regex ".*\.venv/.*" -not -regex '^./.vale/.*' -not -regex "^./docs/themes/.*" -not -regex "^./.git/.*" -print)
-
+MD_FILES := $(shell find . -type f -regex ".*md" -not -regex '^./tmp/.*' -not -regex '^./vendor/.*' -not -regex ".*\.venv/.*" -not -regex '^./.vale/.*' -not -regex "^./docs/themes/.*" -not -regex "^./.git/.*" -print)
 
 ifeq ($(PAC_VERSION),)
 PAC_VERSION="$(shell git describe --tags --exact-match 2>/dev/null || echo nightly-`date +'%Y%m%d'`-`git rev-parse --short HEAD`)"

config/300-repositories.yaml

Lines changed: 113 additions & 0 deletions
@@ -334,6 +334,118 @@ spec:
                 Settings contains the configuration settings for the repository, including
                 authorization policies, provider-specific configuration, and provenance settings.
               properties:
+                ai:
+                  description: AIAnalysis contains AI/LLM analysis configuration for automated CI/CD pipeline analysis.
+                  properties:
+                    api_url:
+                      description: |-
+                        APIURL is an optional base URL to override the default API endpoint of the LLM provider.
+                        If not specified, provider-specific defaults are used:
+                        - OpenAI: https://api.openai.com/v1
+                        - Gemini: https://generativelanguage.googleapis.com/v1beta
+                        Use this to configure self-hosted LLM instances, proxy services, or alternative endpoints.
+                      type: string
+                    enabled:
+                      description: Enabled controls whether AI analysis is active for this repository
+                      type: boolean
+                    max_tokens:
+                      description: 'MaxTokens limits the response length from the LLM (default: 1000)'
+                      maximum: 4000
+                      minimum: 1
+                      type: integer
+                    provider:
+                      description: Provider specifies which LLM provider to use for analysis
+                      enum:
+                      - openai
+                      - gemini
+                      type: string
+                    roles:
+                      description: Roles defines different analysis scenarios and their configurations
+                      items:
+                        description: AnalysisRole defines a specific analysis scenario with its prompt, conditions, and output configuration.
+                        properties:
+                          context_items:
+                            description: ContextItems defines what context data to include in the analysis
+                            properties:
+                              commit_content:
+                                description: CommitContent includes commit message and diff information
+                                type: boolean
+                              container_logs:
+                                description: ContainerLogs configures inclusion of container/task logs
+                                properties:
+                                  enabled:
+                                    description: Enabled controls whether container logs are included
+                                    type: boolean
+                                  max_lines:
+                                    description: 'MaxLines limits the number of log lines to include (default: 50)'
+                                    maximum: 1000
+                                    minimum: 1
+                                    type: integer
+                                required:
+                                - enabled
+                                type: object
+                              error_content:
+                                description: ErrorContent includes error messages and failure summaries
+                                type: boolean
+                              pr_content:
+                                description: PRContent includes pull request title, description, and metadata
+                                type: boolean
+                            type: object
+                          model:
+                            description: |-
+                              Model specifies which LLM model to use for this role (optional).
+                              You can specify any model supported by your provider.
+                              If not specified, provider-specific defaults are used:
+                              - OpenAI: gpt-5-mini
+                              - Gemini: gemini-2.5-flash-lite
+                            type: string
+                          name:
+                            description: Name is a unique identifier for this analysis role
+                            type: string
+                          on_cel:
+                            description: OnCEL is a CEL expression that determines when this role should be triggered
+                            type: string
+                          output:
+                            default: pr-comment
+                            description: 'Output specifies where the analysis results should be sent (default: pr-comment)'
+                            enum:
+                            - pr-comment
+                            type: string
+                          prompt:
+                            description: Prompt is the base prompt template sent to the LLM for analysis
+                            type: string
+                        required:
+                        - name
+                        - prompt
+                        type: object
+                      minItems: 1
+                      type: array
+                      x-kubernetes-list-map-keys:
+                      - name
+                      x-kubernetes-list-type: map
+                    secret_ref:
+                      description: TokenSecretRef references the Kubernetes secret containing the LLM provider API token
+                      properties:
+                        key:
+                          description: Key in the secret
+                          type: string
+                        name:
+                          description: Name of the secret
+                          type: string
+                      required:
+                      - name
+                      type: object
+                    timeout_seconds:
+                      description: 'TimeoutSeconds sets the maximum time to wait for LLM analysis (default: 30)'
+                      maximum: 300
+                      minimum: 1
+                      type: integer
+                  required:
+                  - enabled
+                  - provider
+                  - roles
+                  - secret_ref
+                  type: object
                 github:
                   properties:
                     comment_strategy:
@@ -407,6 +519,7 @@ spec:
               type: string
             type: object
           required:
+          - metadata
          - spec
        type: object
  scope: Namespaced

docs/content/docs/guide/incoming_webhook.md

Lines changed: 2 additions & 2 deletions
@@ -130,7 +130,7 @@ spec:
       params:
         - prod_env
       type: webhook-url
-
+
   # Feature branches - checked second
   - targets:
       - "feature/*"
@@ -140,7 +140,7 @@ spec:
       params:
         - dev_env
       type: webhook-url
-
+
   # Catch-all - checked last
   - targets:
       - "*" # Matches any branch not caught above
