
Commit ef27ecd

update vars.env to add llm and base_url params, remove datetime.timedelta from tool calls (#336)
1 parent b0dbaf9 commit ef27ecd

File tree

6 files changed: +46 −21 lines


industries/healthcare/agentic-healthcare-front-desk/README.md

Lines changed: 4 additions & 5 deletions

````diff
@@ -41,15 +41,14 @@ The agentic tool calling capability in each of the customer care assistants is p
 
 ## Prerequisites
 ### Hardware
-There are no local GPU requirements for running any application in this repo. The LLMs utilized in LangGraph in this repo are by default set to calling NVIDIA AI Endpoints, as seen in the directory [`graph_definitions/`](./graph_definitions/), and require a valid NVIDIA API KEY.
+There are no local GPU requirements for running any application in this repo. The LLMs utilized in LangGraph in this repo are by default set to calling NVIDIA AI Endpoints, since `BASE_URL` is set to the default value of `"https://integrate.api.nvidia.com/v1"` in [vars.env](./vars.env), and require a valid NVIDIA API KEY. As seen in the [graph definitions](./graph_definitions/):
 ```python
 from langchain_nvidia_ai_endpoints import ChatNVIDIA
-llm_model = "meta/llama-3.1-70b-instruct"
-assistant_llm = ChatNVIDIA(model=llm_model)
+assistant_llm = ChatNVIDIA(model=llm_model, ...)
 ```
-You can experiment with other LLMs available on build.nvidia.com by changing the `model` param for `ChatNVIDIA` in the Python files in the directory [`graph_definitions/`](./graph_definitions/).
+You can experiment with other LLMs available on build.nvidia.com by changing the `LLM_MODEL` value in [vars.env](./vars.env), which is passed into `ChatNVIDIA` in the Python files in the directory [`graph_definitions/`](./graph_definitions/).
 
-If, instead of calling NVIDIA AI Endpoints with an API key, you would like to host your own LLM NIM instance, please refer to the [Docker tab of the LLM NIM](https://build.nvidia.com/meta/llama-3_1-70b-instruct?snippet_tab=Docker) on how to host, and add a [`base_url` parameter](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/#working-with-nvidia-nims) pointing to your own instance when specifying `ChatNVIDIA` in the Python files in the directory [`graph_definitions/`](./graph_definitions/). For the hardware configuration of self-hosting the LLM, please refer to the [LLM support matrix documentation](https://docs.nvidia.com/nim/large-language-models/latest/support-matrix.html).
+If, instead of calling NVIDIA AI Endpoints with an API key, you would like to host your own LLM NIM instance, please refer to the [Docker tab of the LLM NIM](https://build.nvidia.com/meta/llama-3_1-70b-instruct?snippet_tab=Docker) on how to host, and change the `BASE_URL` parameter in [vars.env](./vars.env) to [point to your own instance](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/#working-with-nvidia-nims), which is used when constructing `ChatNVIDIA` in the Python files in the directory [`graph_definitions/`](./graph_definitions/). For the hardware configuration of self-hosting the LLM, please refer to the [LLM support matrix documentation](https://docs.nvidia.com/nim/large-language-models/latest/support-matrix.html).
 
 ### NVIDIA API KEY
 You will need an NVIDIA API KEY to call NVIDIA AI Endpoints. You can use different model API endpoints with the same API key, so even if you change the LLM specification in `ChatNVIDIA(model=llm_model)` you can still use the same API KEY.
````
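The configuration flow described above can be sketched standalone. This is an illustrative sketch, not repo code: the `setdefault` calls stand in for loading vars.env (the values shown are the defaults shipped in that file), and the `ChatNVIDIA` construction is left in a comment because it requires the `langchain_nvidia_ai_endpoints` package and a valid API key.

```python
import os

# Stand-in for loading vars.env; these are the defaults shipped in that file.
os.environ.setdefault("LLM_MODEL", "meta/llama-3.3-70b-instruct")
os.environ.setdefault("BASE_URL", "https://integrate.api.nvidia.com/v1")

llm_model = os.getenv("LLM_MODEL")
base_url = os.getenv("BASE_URL")

# In the graph definitions these feed straight into the client:
#   from langchain_nvidia_ai_endpoints import ChatNVIDIA
#   assistant_llm = ChatNVIDIA(model=llm_model, base_url=base_url)
print(f"model={llm_model} endpoint={base_url}")
```

Swapping models or pointing at a self-hosted NIM then means editing vars.env only, with no changes to the graph definition files.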

industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph.py

Lines changed: 15 additions & 6 deletions

```diff
@@ -49,8 +49,7 @@
 #################
 patient_id = '14867dba-fb11-4df3-9829-8e8e081b39e6' # test patient id from looking through https://launch.smarthealthit.org/
 save_graph_to_png = True
-main_llm_model = "meta/llama-3.1-70b-instruct"
-specialized_llm_model = "meta/llama-3.1-70b-instruct"
+
 env_var_file = "vars.env"
 
 local_file_constant = "sample_db/test_db.sqlite"
@@ -73,9 +72,19 @@
 RIVA_ASR_FUNCTION_ID = os.getenv("RIVA_ASR_FUNCTION_ID", None)
 RIVA_TTS_FUNCTION_ID = os.getenv("RIVA_TTS_FUNCTION_ID", None)
 
+assert os.environ['LLM_MODEL'] is not None, "Make sure you have your LLM_MODEL exported as an environment variable!"
+main_llm_model = os.getenv("LLM_MODEL", None)
+
+assert os.environ['LLM_MODEL'] is not None, "Make sure you have your LLM_MODEL exported as an environment variable!"
+specialized_llm_model = os.getenv("LLM_MODEL", None)
+
+assert os.environ['BASE_URL'] is not None, "Make sure you have your BASE_URL exported as an environment variable!"
+base_url = os.getenv("BASE_URL", None)
+
 ### define which llm to use
-main_assistant_llm = ChatNVIDIA(model=main_llm_model)#, base_url=base_url
-specialized_assistant_llm = ChatNVIDIA(model=specialized_llm_model)#, base_url=base_url
+main_assistant_llm = ChatNVIDIA(model=main_llm_model, base_url=base_url)
+specialized_assistant_llm = ChatNVIDIA(model=specialized_llm_model, base_url=base_url)
 
 def update_dialog_stack(left: list[str], right: Optional[str]) -> list[str]:
     """Push or pop the state."""
@@ -247,7 +256,7 @@ def print_gathered_patient_info(
     patient_dob: datetime.date,
     allergies_medication: List[str],
     current_symptoms: str,
-    current_symptoms_duration: datetime.timedelta,
+    current_symptoms_duration: str,
     pharmacy_location: str
 ):
     """This function prints out and transmits the gathered information for each patient intake field:
@@ -373,7 +382,7 @@ class ToPatientIntakeAssistant(BaseModel):
     patient_dob: datetime.date = Field(description="The patient's date of birth.")
     allergies_medication: List[str] = Field(description="A list of allergies in medication for the patient.")
     current_symptoms: str = Field(description="A description of the current symptoms for the patient.")
-    current_symptoms_duration: datetime.timedelta = Field(description="The time duration of current symptoms.")
+    current_symptoms_duration: str = Field(description="The time duration of current symptoms.")
     pharmacy_location: str = Field(description="The patient's pharmacy location.")
     request: str = Field(
         description="Any necessary information the patient intake assistant should clarify before proceeding."
```
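The `datetime.timedelta` → `str` change in the tool signatures reflects a serialization constraint rather than a style preference: tool-call arguments travel as JSON, which has no duration type, so a `timedelta`-typed field cannot round-trip through a tool-call payload. A minimal stdlib sketch of the failure mode (field name reused from the diff for illustration):

```python
import datetime
import json

# Tool-call arguments are exchanged as JSON, which has no native duration
# type, so a timedelta value cannot be serialized into a tool-call payload.
try:
    json.dumps({"current_symptoms_duration": datetime.timedelta(days=3)})
    serializable = True
except TypeError:
    serializable = False
print("timedelta serializable:", serializable)

# A plain string round-trips cleanly and lets the LLM fill it naturally
# ("3 days", "about a week", ...).
payload = json.dumps({"current_symptoms_duration": "3 days"})
print(payload)
```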

industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_appointment_making_only.py

Lines changed: 7 additions & 2 deletions

```diff
@@ -42,7 +42,6 @@
 patient_id = '14867dba-fb11-4df3-9829-8e8e081b39e6' # test patient id from looking through https://launch.smarthealthit.org/
 save_graph_to_png = True
 
-llm_model = "meta/llama-3.1-70b-instruct"
 env_var_file = "vars.env"
 
 local_file_constant = "sample_db/test_db.sqlite"
@@ -63,8 +62,14 @@
 RIVA_ASR_FUNCTION_ID = os.getenv("RIVA_ASR_FUNCTION_ID", None)
 RIVA_TTS_FUNCTION_ID = os.getenv("RIVA_TTS_FUNCTION_ID", None)
 
+assert os.environ['LLM_MODEL'] is not None, "Make sure you have your LLM_MODEL exported as an environment variable!"
+llm_model = os.getenv("LLM_MODEL", None)
+
+assert os.environ['BASE_URL'] is not None, "Make sure you have your BASE_URL exported as an environment variable!"
+base_url = os.getenv("BASE_URL", None)
+
 ### define which llm to use
-assistant_llm = ChatNVIDIA(model=llm_model) # base_url=base_url)
+assistant_llm = ChatNVIDIA(model=llm_model, base_url=base_url)
 
 ########################
 ### Define the tools ###
```

industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_medication_lookup_only.py

Lines changed: 8 additions & 3 deletions

```diff
@@ -49,7 +49,6 @@
 #################
 patient_id = '14867dba-fb11-4df3-9829-8e8e081b39e6' # test patient id from looking through https://launch.smarthealthit.org/
 save_graph_to_png = True
-llm_model = "meta/llama-3.1-70b-instruct"
 env_var_file = "vars.env"
 
 
@@ -62,14 +61,20 @@
 
 assert os.environ['NVIDIA_API_KEY'] is not None, "Make sure you have your NVIDIA_API_KEY exported as an environment variable!"
 assert os.environ['TAVILY_API_KEY'] is not None, "Make sure you have your TAVILY_API_KEY exported as an environment variable!"
-
 NVIDIA_API_KEY=os.getenv("NVIDIA_API_KEY", None)
 RIVA_API_URI = os.getenv("RIVA_API_URI", None)
+
 RIVA_ASR_FUNCTION_ID = os.getenv("RIVA_ASR_FUNCTION_ID", None)
 RIVA_TTS_FUNCTION_ID = os.getenv("RIVA_TTS_FUNCTION_ID", None)
 
+assert os.environ['LLM_MODEL'] is not None, "Make sure you have your LLM_MODEL exported as an environment variable!"
+llm_model = os.getenv("LLM_MODEL", None)
+
+assert os.environ['BASE_URL'] is not None, "Make sure you have your BASE_URL exported as an environment variable!"
+base_url = os.getenv("BASE_URL", None)
+
 ### define which llm to use
-assistant_llm = ChatNVIDIA(model=llm_model)#, base_url=base_url
+assistant_llm = ChatNVIDIA(model=llm_model, base_url=base_url)
 
 ########################
 ### Define the tools ###
```
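The assert-then-getenv pairs repeated across these graph files could be factored into a small helper. This is a sketch, and `require_env` is a hypothetical name, not part of the repo. One subtlety it also fixes: `os.environ['LLM_MODEL']` raises `KeyError` when the variable is missing, so the assert message after it would never be shown; `os.getenv` returns `None` instead, letting the message reach the user.

```python
import os

def require_env(name: str) -> str:
    """Read a required environment variable, failing with a clear message.

    os.environ[name] would raise KeyError before any assert message could
    be shown; os.getenv returns None instead, so the message survives.
    """
    value = os.getenv(name)
    assert value is not None, (
        f"Make sure you have your {name} exported as an environment variable!"
    )
    return value

os.environ.setdefault("LLM_MODEL", "meta/llama-3.3-70b-instruct")  # demo value
llm_model = require_env("LLM_MODEL")
print(llm_model)
```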

industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_patient_intake_only.py

Lines changed: 9 additions & 4 deletions

```diff
@@ -37,7 +37,6 @@
 patient_id = '14867dba-fb11-4df3-9829-8e8e081b39e6' # test patient id from looking through https://launch.smarthealthit.org/
 save_graph_to_png = True
 
-llm_model = "meta/llama-3.1-70b-instruct"
 env_var_file = "vars.env"
 
 
@@ -48,14 +47,20 @@
 print("Your NVIDIA_API_KEY is set to: ", os.environ['NVIDIA_API_KEY'])
 
 assert os.environ['NVIDIA_API_KEY'] is not None, "Make sure you have your NVIDIA_API_KEY exported as an environment variable!"
-
 NVIDIA_API_KEY=os.getenv("NVIDIA_API_KEY", None)
+
 RIVA_API_URI = os.getenv("RIVA_API_URI", None)
 RIVA_ASR_FUNCTION_ID = os.getenv("RIVA_ASR_FUNCTION_ID", None)
 RIVA_TTS_FUNCTION_ID = os.getenv("RIVA_TTS_FUNCTION_ID", None)
 
+assert os.environ['LLM_MODEL'] is not None, "Make sure you have your LLM_MODEL exported as an environment variable!"
+llm_model = os.getenv("LLM_MODEL", None)
+
+assert os.environ['BASE_URL'] is not None, "Make sure you have your BASE_URL exported as an environment variable!"
+base_url = os.getenv("BASE_URL", None)
+
 ### define which llm to use
-assistant_llm = ChatNVIDIA(model=llm_model) # base_url=base_url
+assistant_llm = ChatNVIDIA(model=llm_model, base_url=base_url)
 
 ########################
 ### Define the tools ###
@@ -73,7 +78,7 @@ def print_gathered_patient_info(
     patient_dob: datetime.date,
     allergies_medication: List[str],
     current_symptoms: str,
-    current_symptoms_duration: datetime.timedelta,
+    current_symptoms_duration: str,
     pharmacy_location: str
 ):
     """This function prints out and transmits the gathered information for each patient intake field:
```

industries/healthcare/agentic-healthcare-front-desk/vars.env

Lines changed: 3 additions & 1 deletion

```diff
@@ -2,4 +2,6 @@ NVIDIA_API_KEY="nvapi-"
 TAVILY_API_KEY="tvly-"
 RIVA_API_URI=grpc.nvcf.nvidia.com:443
 RIVA_ASR_FUNCTION_ID=1598d209-5e27-4d3c-8079-4751568b1081
-RIVA_TTS_FUNCTION_ID=0149dedb-2be8-4195-b9a0-e57e0e14f972
+RIVA_TTS_FUNCTION_ID=0149dedb-2be8-4195-b9a0-e57e0e14f972
+BASE_URL="https://integrate.api.nvidia.com/v1"
+LLM_MODEL="meta/llama-3.3-70b-instruct"
```
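Since vars.env is a plain KEY=VALUE file, its contents can be exported into a POSIX shell with the `allexport` option. A standalone sketch (it writes a throwaway copy of the two new entries to `/tmp` rather than assuming the repo checkout is present):

```shell
# Write a throwaway vars.env-style file with the two entries this commit adds.
cat > /tmp/vars.env <<'EOF'
BASE_URL="https://integrate.api.nvidia.com/v1"
LLM_MODEL="meta/llama-3.3-70b-instruct"
EOF

set -a           # auto-export every variable assigned while sourcing
. /tmp/vars.env
set +a

echo "LLM_MODEL=$LLM_MODEL"
```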
