fix(examples): allow all examples to run with same LLM_API_KEY #1065
Conversation
Allow the hello world example to use the same LLM_API_KEY and LLM_BASE_URL environment variables as the other examples, enabling consistent configuration across all examples.

Changes:
- Use os.getenv() for the model, with default 'openhands/claude-sonnet-4-5-20250929'
- Add a base_url parameter read from the LLM_BASE_URL environment variable
- Add a usage_id for better tracking
- Add comments about API key configuration
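The change boils down to reading configuration from the environment instead of hardcoding it. A minimal sketch of that pattern, assuming the SDK's LLM constructor takes these fields (a plain dict stands in for it here, since the exact signature is not shown in this PR):

```python
import os

# Env-driven configuration, as described in the commit message above.
# The field names mirror the PR description; the dict is a stand-in for
# the SDK's actual LLM object.
llm_config = {
    "model": os.getenv("LLM_MODEL", "openhands/claude-sonnet-4-5-20250929"),
    "base_url": os.getenv("LLM_BASE_URL"),  # optional; None means provider default
    "api_key": os.getenv("LLM_API_KEY"),    # set once, shared by all examples
    "usage_id": "hello-world",              # hypothetical id for cost tracking
}
print(llm_config["model"])
```

With this in place, a user who exports LLM_API_KEY (and optionally LLM_MODEL and LLM_BASE_URL) once can run every example without editing source files.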
Force-pushed from dfa6594 to 7f3ad3c
rbren
left a comment
will give feedback in a min :)
Based on feedback via Slack, added a commit that:
- Updates all example files to use anthropic/claude-sonnet-4-5-20250929 as the default model
- Removes the comments about obtaining credits from the hello world example

Model and API key can still be overridden via the LLM_MODEL and LLM_API_KEY environment variables.

This commit does not address the few cases where a model other than the default is required.
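The override behavior described here is just an env read with a fallback. A small sketch (`resolve_model` is a hypothetical helper for illustration; the examples read `os.getenv` inline rather than through a helper):

```python
import os

def resolve_model(default: str = "anthropic/claude-sonnet-4-5-20250929") -> str:
    """Return LLM_MODEL if it is set, otherwise the shared default."""
    return os.getenv("LLM_MODEL", default)

# With the variable set, the override wins:
os.environ["LLM_MODEL"] = "some-provider/some-model"
print(resolve_model())

# With it unset, every example falls back to the same default:
del os.environ["LLM_MODEL"]
print(resolve_model())
```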
…LM routing example Ensures both primary and secondary LLMs use the same provider/key by default. Co-authored-by: openhands <openhands@all-hands.dev>
…M_MODEL Restores in-line env read for API key with model still read from env with fallback. Co-authored-by: openhands <openhands@all-hands.dev>
🔄 Running Examples with
| Example | Status | Duration | Cost |
|---|---|---|---|
| 01_standalone_sdk/02_custom_tools.py | ✅ PASS | 26s | $0.03 |
| 01_standalone_sdk/03_activate_skill.py | ✅ PASS | 11s | $0.01 |
| 01_standalone_sdk/05_use_llm_registry.py | ✅ PASS | 8s | $0.01 |
| 01_standalone_sdk/07_mcp_integration.py | ✅ PASS | 43s | $0.02 |
| 01_standalone_sdk/09_pause_example.py | ✅ PASS | 13s | $0.01 |
| 01_standalone_sdk/10_persistence.py | ✅ PASS | 37s | $0.03 |
| 01_standalone_sdk/11_async.py | ✅ PASS | 32s | $0.03 |
| 01_standalone_sdk/12_custom_secrets.py | ✅ PASS | 14s | $0.01 |
| 01_standalone_sdk/13_get_llm_metrics.py | ✅ PASS | 31s | $0.02 |
| 01_standalone_sdk/14_context_condenser.py | ✅ PASS | 198s | $0.40 |
| 01_standalone_sdk/17_image_input.py | ✅ PASS | 19s | $0.02 |
| 01_standalone_sdk/18_send_message_while_processing.py | ✅ PASS | 20s | $0.01 |
| 01_standalone_sdk/19_llm_routing.py | ✅ PASS | 16s | $0.01 |
| 01_standalone_sdk/20_stuck_detector.py | ✅ PASS | 16s | $0.01 |
| 01_standalone_sdk/21_generate_extraneous_conversation_costs.py | ❌ FAIL (exit: 1) | 10s | $0.00 |
| 01_standalone_sdk/22_anthropic_thinking.py | ✅ PASS | 14s | $0.01 |
| 01_standalone_sdk/23_responses_reasoning.py | ✅ PASS | 36s | $0.01 |
| 01_standalone_sdk/24_planning_agent_workflow.py | ✅ PASS | 312s | $0.42 |
| 01_standalone_sdk/25_agent_delegation.py | ❌ FAIL (exit: 1) | 45s | $0.00 |
| 01_standalone_sdk/26_custom_visualizer.py | ✅ PASS | 19s | N/A |
| 02_remote_agent_server/01_convo_with_local_agent_server.py | ✅ PASS | 55s | $0.04 |
| 02_remote_agent_server/02_convo_with_docker_sandboxed_server.py | ✅ PASS | 116s | $0.04 |
| 02_remote_agent_server/03_browser_use_with_docker_sandboxed_server.py | ✅ PASS | 63s | $0.04 |
| 02_remote_agent_server/04_convo_with_api_sandboxed_server.py | ❌ FAIL (exit: 1) | 3s | $0.00 |
❌ Some tests failed
Total: 24 | Passed: 21 | Failed: 3
…osts example Ensures CI uses the same provider/key by reading model from LLM_MODEL for both LLMs. Co-authored-by: openhands <openhands@all-hands.dev>
xingyaoww
left a comment
LGTM
Problem

The hello world example uses a hardcoded Anthropic model name instead of reading from environment variables like the other examples. This creates an inconsistent experience: users must configure a different API key variable than LLM_API_KEY depending on which example they want to run.

Solution

This change makes hello world consistent with the other examples by reading the model from the LLM_MODEL environment variable, with default openhands/claude-sonnet-4-5-20250929. Now users can set up their environment once and run all examples with the same configuration, making the SDK more approachable for first-time users and smoother for demos.
Agent Server images for this PR
• GHCR package: https://github.com/OpenHands/agent-sdk/pkgs/container/agent-server
Variants & Base Images
- eclipse-temurin:17-jdk
- nikolaik/python-nodejs:python3.12-nodejs22
- golang:1.21-bookworm

Pull (multi-arch manifest)
```shell
# Each variant is a multi-arch manifest supporting both amd64 and arm64
docker pull ghcr.io/openhands/agent-server:7c9df63-python
```

Run
All tags pushed for this build
About Multi-Architecture Support
- Each variant tag (e.g. 7c9df63-python) is a multi-arch manifest supporting both amd64 and arm64
- Architecture-specific tags (e.g. 7c9df63-python-amd64) are also available if needed