Introduce an AI/LLM-powered analysis feature that automatically
diagnoses failed pipeline runs. It helps developers by providing root
cause analysis and suggested fixes directly in pull requests, reducing
debugging time.
The feature is configured in the Repository CRD under `spec.settings.ai`.
Key capabilities include:
- Support for OpenAI and Google Gemini providers, with configurable
models per analysis role.
- Flexible analysis scenarios ("roles") with custom prompts and
conditional triggers via CEL expressions.
- Rich analysis context, including detailed commit information, PR
metadata, error messages, and container logs.
- Output of analysis results as comments on pull requests.
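As a rough illustration of the capabilities above, a Repository CRD might
be configured along these lines. Only the `spec.settings.ai` path is
confirmed by this change description; every field name beneath it
(`provider`, `roles`, `model`, `prompt`, `trigger`) and the role/repository
names are hypothetical placeholders, not the actual schema.

```yaml
# Hypothetical sketch only: field names under spec.settings.ai are
# illustrative assumptions, not the real Pipelines-as-Code schema.
apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: my-repo
spec:
  url: https://github.com/org/my-repo
  settings:
    ai:
      provider: openai              # assumption: "openai" or "gemini"
      roles:
        - name: root-cause          # assumption: one role per analysis scenario
          model: gpt-4o             # assumption: model configurable per role
          prompt: |
            Analyze the failed pipeline run, identify the root cause,
            and suggest a fix.
          # assumption: a CEL expression gating when this role runs
          trigger: 'event_type == "pull_request"'
```

With a setup like this, a failing run would trigger the matching role, and the resulting analysis would be posted as a comment on the pull request.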
To support this, Git provider integrations were enhanced to fetch more
comprehensive commit details. A fake LLM server (`nonoai`) was also
added to enable robust and cost-effective E2E testing.
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Gemini <gemini@google.com>
Signed-off-by: Chmouel Boudjnah <chmouel@redhat.com>