Skip to content

Commit 7dedc1e

Browse files
authored
Python Genesis App Metrics Fix (#431)
*Description of changes:* Main build failed a few times due to metrics not being emitted from the Genesis sample app. Adding a custom metric emission to prevent flaky issue. By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
1 parent 19c187a commit 7dedc1e

File tree

2 files changed

+18
-1
lines changed

2 files changed

+18
-1
lines changed

.github/workflows/python-sample-app-s3-deploy.yml

Lines changed: 8 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -67,3 +67,11 @@ jobs:
6767
working-directory: lambda-layer/sample-apps
6868
run: aws s3api put-object --bucket ${{ secrets.APP_SIGNALS_E2E_EC2_JAR }}-prod-${{ matrix.aws-region }} --body ./build/function.zip --key pyfunction.zip
6969

70+
- name: Build Gen AI Sample App Zip
71+
working-directory: sample-apps/python/genai_service
72+
run: zip -r python-gen-ai-sample-app.zip .
73+
74+
- name: Upload Gen AI Sample App to S3
75+
working-directory: sample-apps/python/genai_service
76+
run: aws s3api put-object --bucket ${{ secrets.APP_SIGNALS_E2E_EC2_JAR }}-prod-${{ matrix.aws-region }} --body ./python-gen-ai-sample-app.zip --key python-gen-ai-sample-app.zip
77+

sample-apps/python/genai_service/server.py

Lines changed: 10 additions & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -6,11 +6,12 @@
66
from langchain_aws import ChatBedrock
77
from langchain.prompts import ChatPromptTemplate
88
from langchain.chains import LLMChain
9-
from opentelemetry import trace
9+
from opentelemetry import trace, metrics
1010
from opentelemetry.sdk.trace import TracerProvider
1111
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
1212
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
1313
from openinference.instrumentation.langchain import LangChainInstrumentor
14+
import random
1415

1516
# Load environment variables
1617
load_dotenv()
@@ -92,7 +93,15 @@ async def chat(request: ChatRequest):
9293
"""
9394
Chat endpoint that processes a single user message through AWS Bedrock
9495
"""
96+
9597
try:
98+
# Emit OTel Metrics
99+
meter = metrics.get_meter("genesis-meter", "1.0.0")
100+
request_duration = meter.create_histogram(
101+
name="Genesis_TestMetrics", description="Genesis request duration", unit="s"
102+
)
103+
request_duration.record(0.1 + (0.5 * random.random()), {"method": "GET", "status": "200"})
104+
96105
# Process the input through the chain
97106
result = await chain.ainvoke({"input": request.message})
98107
return ChatResponse(response=result["text"])

0 commit comments

Comments
 (0)