diff --git a/industries/healthcare/agentic-healthcare-front-desk/Dockerfile b/industries/healthcare/agentic-healthcare-front-desk/Dockerfile deleted file mode 100644 index ba286a73c..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/Dockerfile +++ /dev/null @@ -1,35 +0,0 @@ -ARG BASE_IMAGE_URL=nvcr.io/nvidia/base/ubuntu -ARG BASE_IMAGE_TAG=22.04_20240212 - -FROM ${BASE_IMAGE_URL}:${BASE_IMAGE_TAG} - -ENV DEBIAN_FRONTEND noninteractive -# Install required ubuntu packages for setting up python 3.10 -RUN apt update && \ - apt install -y curl software-properties-common libgl1 libglib2.0-0 && \ - add-apt-repository ppa:deadsnakes/ppa && \ - apt update && apt install -y python3.10 && \ - apt-get clean - -# Install pip for python3.10 -RUN curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10 - -RUN rm -rf /var/lib/apt/lists/* - -# Uninstall build packages -RUN apt autoremove -y curl software-properties-common - - -# install dependencies -RUN apt update && apt-get install -y python3-dev build-essential -RUN pip install --upgrade pip && pip install --upgrade setuptools -COPY ./requirements.txt /opt/requirements.txt -RUN pip3 install --no-cache-dir --use-pep517 -r /opt/requirements.txt - - -COPY ./ /app - -WORKDIR /app - -#ENTRYPOINT ["python", "chain_server.py"] - diff --git a/industries/healthcare/agentic-healthcare-front-desk/README.md b/industries/healthcare/agentic-healthcare-front-desk/README.md index 72d3a8da2..9edff8879 100644 --- a/industries/healthcare/agentic-healthcare-front-desk/README.md +++ b/industries/healthcare/agentic-healthcare-front-desk/README.md @@ -1,197 +1,5 @@ # Agentic Healthcare Front Desk -![](./images/architecture_diagram.png) - -An agentic healthcare front desk can assist patients and the healthcare professionals in various scanarios: it can assist with the new patient intake process, going over each of the fields in a enw patient form with the patients; it can assist with the appointment scheduling process, looking up available appointments and booking them for patients after conversing with the patient to find out their needs; it can help look up the patient's medications and general information on the medications, and more. - -The front desk assistant contains agentic LLM NIM with tools calling capabilities implemented in the LangGraph framework. - -Follow along this repository to see how you can create your own digital human for Healthcare front desk that combines NVIDIA NIM, ACE Microservices, RIVA ASR and TTS. - -We will offer two options for interacting with the agentic healthcare front desk: with a text / voice based Gradio UI or with a digital human avatar you can converse with. -![](./images/repo_overview_structure_diagram.png) - -> [!NOTE] -> Currently, there is a higher latency expected during LLM tool calling. Interaction with the agent could take a few seconds for non tool calling responses, and could take much higher (30+ seconds) for tool calling responses. If you're utilizing the NVIDIA AI Endpoints for the LLM, which is the default for this repo, latency can vary depending on the traffic to the endpoints. An improvement to this tool call latency issue is in development for the LLM NIMs, please stay tuned. - -> [!IMPORTANT] -> Integration with ACE is under active development and will be available soon. - -[NVIDIA ACE](https://developer.nvidia.com/ace) is a suite of real-time AI solutions for end-to-end development of interactive avatars and digital human applications at-scale. 
Its customizable microservices offer the fastest and most versatile solution for bringing avatars to life at-scale. The image below from the GitHub repository for `NIM Agent Blueprint: Digital Human for Customer Service` show the components in the ACE stack on the left side of the dotted line. The components shown on the right side of the dotted line starting from `Fast API` will be replaced by our own components in the agentic healthcare front desk. - -![](./images/ACE_diagram.png) - -## Table of Contents -1. [Introduction](#introduction) -2. [Prerequisites](#prerequisites) -3. [Run Instructions](#run-instructions) -4. [Customization](#customization) - - -## Introduction -In this repository, we demonstrate the following: -* A customer care agent in Langgraph that has three specialist assistants: patient intake, medication assistance, and appointment making, with corresponding tools. -* A customer care agent in Langgraph for patient intake only. -* A customer care agent in Langgraph for appointment making only. -* A customer care agent in Langgraph for medication lookup only. -* A Gradio based UI that allows us to use voice or typing to converse with any of the four agents. -* A chain server. - -The agentic tool calling capability in each of the customer care assistants is powered by LLM NIMs - NVIDIA Inference Microservices. With the agentic capability, you can write your own tools to be utilized by LLMs. - -## Prerequisites -### Hardware -There are no local GPU requirements for running any application in this repo. The LLMs utilized in LangGraph in this repo are by default set to calling NVIDIA AI Endpoints since `BASE_URL` is set to the default value of `"https://integrate.api.nvidia.com/v1"` in [vars.env](./vars.env), and require a valid NVIDIA API KEY. As seen in the [graph definitions](./graph_definitions/): -```python -from langchain_nvidia_ai_endpoints import ChatNVIDIA -assistant_llm = ChatNVIDIA(model=llm_model, ...) -``` -You can experiment with other LLMs available on build.nvidia.com by changing the `LLM_MODEL` values in [vars.env](./vars.env), for passing into `ChatNVIDIA` in the Python files in the directory [`graph_definitions/`](./graph_definitions/). - -If instead of calling NVIDIA AI Endpoints with an API key, you would like to host your own LLM NIM instance, please refer to the [Docker tab of the LLM NIM](https://build.nvidia.com/meta/llama-3_1-70b-instruct?snippet_tab=Docker) on how to host, and changed the `BASE_URL` parameter in [vars.env](./vars.env) to [point to your own instance](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints/#working-with-nvidia-nims) when specifying `ChatNVIDIA` in the Python files in the directory [`graph_definitions/`](./graph_definitions/). For the hardware configuration of self hosting the LLM, please refer to the [documentation for LLM support matrix](https://docs.nvidia.com/nim/large-language-models/latest/support-matrix.html). - -### NVIDIA API KEY -You will need an NVIDIA API KEY to call NVIDIA AI Endpoints. You can use different model API endpoints with the same API key, so even if you change the LLM specification in `ChatNVIDIA(model=llm_model)` you can still use the same API KEY. - -a. Navigate to [https://build.nvidia.com/explore/discover](https://build.nvidia.com/explore/discover). - -b. Find the **Llama 3.1 70B Instruct** card and click the card. - -![Llama 3 70B Instruct model card](./images/llama31-70b-instruct-model-card.png) - -c. Click **Get API Key**. 
- -![API section of the model page.](./images/llama31-70b-instruct-get-api-key.png) -Log in if you haven't already. - -d. Click **Generate Key**. - -![Generate key window.](./images/api-catalog-generate-api-key.png) - -e. Click **Copy Key** and then save the API key. The key begins with the letters ``nvapi-``. - -![Key Generated window.](./images/key-generated.png) - - -### Software - -- Linux operating systems (Ubuntu 22.04 or later recommended) -- [Docker](https://docs.docker.com/engine/install/) -- [Docker Compose](https://docs.docker.com/compose/install/) - - - -## Run Instructions - -As illustrated in the diagrams in the beginning, in this repo, we could run two types of applications, one is a FastAPI-based chain server, the other one is a simple voice/text Gradio UI for the healthcare agent. In this documentation, we will be showing how to use the Gradio UI, with instructions for connecting the chain server to ACE coming soon. - -Regardless of the type of application you'd like to run, first, please add your API Keys. - -### 1. Add Your API keys Prior to Running -In the file `vars.env`, add two API keys of your own: -``` -NVIDIA_API_KEY="nvapi-" -TAVILY_API_KEY="tvly-" -``` -Note the Tavily key is only required if you want to run the full graph or the medication lookup graph. Get your API Key from the [Tavily website](https://app.tavily.com/). This is used in the tool named `medication_instruction_search_tool` in [`graph.py`](./graph_definitions/graph.py) or [`graph_medication_lookup_only.py`](./graph_definitions/graph_medication_lookup_only.py). - -### 2. Running the simple voice/text Gradio UI -To spin up a simple Gradio based web UI that allows us to converse with one of the agents via voice or typing, run one of these following services. - -##### 2.1 The patient intake agent -Run the patient intake only agent. - -```sh -# to run the container with the assumption we have done build: -docker compose up -d patient-intake-ui -# or to build at this command: -docker compose up --build -d patient-intake-ui -``` - -Note this will be running on port 7860 by default. If you need to run on a different port, modify the [`docker-compose.yaml`](./docker-compose.yaml) file's `patient-intake-ui` section and replace all mentions of 7860 with your own port number. - -[Launch the web UI](#25-launch-the-web-ui) on your Chrome browser, you should see this interface: -![](./images/example_ui.png) - -To bring down the patient intake UI: -```sh -docker compose down patient-intake-ui -``` - - -##### 2.2 The appointment making agent -Run the appointment making only agent. -```sh -# to run the container with the assumption we have done build: -docker compose up -d appointment-making-ui -# or to build at this command: -docker compose up --build -d appointment-making-ui -``` - -Note this will be running on port 7860 by default. If you need to run on a different port, modify the [`docker-compose.yaml`](./docker-compose.yaml) file's `appointment-making-ui` section and replace all mentions of 7860 with your own port number. - -[Launch the web UI](#25-launch-the-web-ui) on your Chrome browser, you should see the same web interface as above. - -To bring down the appointment making UI: -```sh -docker compose down appointment-making-ui -``` - -##### 2.3 The full agent -Run the full agent comprising of three specialist agents. 
-```sh -# to run the container with the assumption we have done build: -docker compose up -d full-agent-ui -# or to build at this command: -docker compose up --build -d full-agent-ui -``` - -Note this will be running on port 7860 by default. If you need to run on a different port, modify the [`docker-compose.yaml`](./docker-compose.yaml) file's `full-agent-ui` section and replace all mentions of 7860 with your own port number. - -[Launch the web UI](#25-launch-the-web-ui) on your Chrome browser, you should see the same web interface as above. - -To bring down the full agent UI: -```sh -docker compose down full-agent-ui -``` - -##### 2.4 The medication lookup agent -Run the medication lookup only agent. - -```sh -# to run the container with the assumption we have done build: -docker compose up -d medication-lookup-ui -# or to build at this command: -docker compose up --build -d medication-lookup-ui -``` - -Note this will be running on port 7860 by default. If you need to run on a different port, modify the [`docker-compose.yaml`](./docker-compose.yaml) file's `medication-lookup-ui` section and replace all mentions of 7860 with your own port number. - -[Launch the web UI](#25-launch-the-web-ui) on your Chrome browser, you should see the same web interface as above. - -To bring down the medication lookup UI: -```sh -docker compose down medication-lookup-ui -``` - -##### 2.5 Launch the web UI - -Go to your web browser, here we have tested with Google Chrome, and type in `:`. The port number would be `7860` by default, or your modified port number if you changed the port number in [docker-compose.yaml](./docker-compose.yaml). Please note that, before you can use your mic/speaker to interact with the web UI, you will need to enable the origin ([reference](https://github.com/NVIDIA/GenerativeAIExamples/blob/main/docs/using-sample-web-application.md#troubleshooting)): - -If you receive the following `"Media devices could not be accessed"` error message when you first attempt to transcribe a voice query, perform the following steps. - -1. Open another browser tab and enter `chrome://flags` in the location field. - -1. Enter `insecure origins treated as secure` in the search field. - -1. Enter `http://:7860` (or your own port number) in the text box and select **Enabled** from the menu. - -1. Click **Relaunch**. - -1. After the browser opens, grant `http://host-ip:7860` (or your own port number) access to your microphone. - -1. Retry your request on the web UI. - -## Customization -To customize for your own agentic LLM in LangGraph with your own tools, the [LangGraph tutorial on customer support](https://langchain-ai.github.io/langgraph/tutorials/customer-support/customer-support/) is helpful, where you'll find detailed explanations and steps of creating tools and agentic LLM in LangGraph. Afterwards, you can create your own file similar to the graph files in [`graph_definitions/`](./graph_definitions/) which can connect to the simple voice/text Gradio UI by calling [`launch_demo_ui`](./graph_definitions/graph_patient_intake_only.py#L184), or can be imported by the [chain server](./chain_server/chain_server.py#L31). +The contents of this Agentic Healthcare Front Desk repository have been updated and migrated to the `ambient-patient` Developer Example's `agent/` repository located at https://github.com/NVIDIA-AI-Blueprints/ambient-patient/tree/main/agent. 
+In addition to the updated agent backend of the healthcare front desk, the [`ambient-patient`](https://github.com/NVIDIA-AI-Blueprints/ambient-patient) Developer Example integrates a voice interaction front end for the agent. diff --git a/industries/healthcare/agentic-healthcare-front-desk/chain_server/chain_server.py b/industries/healthcare/agentic-healthcare-front-desk/chain_server/chain_server.py deleted file mode 100644 index 6160d0c8e..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/chain_server/chain_server.py +++ /dev/null @@ -1,284 +0,0 @@ -import uvicorn -import argparse - - -from fastapi import FastAPI, Request, status -from fastapi.middleware.cors import CORSMiddleware -from fastapi.responses import StreamingResponse - -from pydantic import BaseModel, Field, constr, validator -import bleach -from typing import List, Generator - -import uuid -from uuid import uuid4 -import sys -import os - -SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) -sys.path.append(os.path.dirname(SCRIPT_DIR)) - -parser = argparse.ArgumentParser() -parser.add_argument("--assistant", choices=["full", "intake", "appointment", "medication"], - help = "Specify full for the full graph with main assistant routing to specialist assistants. " - "Specify intake for the patient intake assistant only. " - "Specify appointment for the appointemnt making assistant only." - "Specify medication for the medication lookup assistant only") -parser.add_argument("--port", type=int, default=8081, - help = "Specify the port number for the chain server to run at.") - -args = parser.parse_args() -if args.assistant == "full": - from graph_definitions.graph import full_graph - assistant_graph = full_graph -elif args.assistant == "intake": - from graph_definitions.graph_patient_intake_only import intake_graph - assistant_graph = intake_graph -elif args.assistant == "appointment": - from graph_definitions.graph_appointment_making_only import appt_graph - assistant_graph = appt_graph -elif args.assistant == "medication": - from graph_definitions.graph_medication_lookup_only import medication_lookup_graph - assistant_graph = medication_lookup_graph -else: - raise Exception("We must specify one of the three options for assistant: full, intake or appointment.") - -port = args.port - -# create the FastAPI server -app = FastAPI() - -# Allow access in browser from RAG UI and Storybook (development) -origins = ["*"] -app.add_middleware( - CORSMiddleware, allow_origins=origins, allow_credentials=False, allow_methods=["*"], allow_headers=["*"], -) - -class Message(BaseModel): - """Definition of the Chat Message type.""" - - role: str = Field( - description="Role for a message AI, User and System", default="user", max_length=256, pattern=r'[\s\S]*' - ) - content: str = Field( - description="The input query/prompt to the pipeline.", - default="I am going to Paris, what should I see?", - max_length=131072, - pattern=r'[\s\S]*', - ) - - @validator('role') - def validate_role(cls, value): - """ Field validator function to validate values of the field role""" - value = bleach.clean(value, strip=True) - valid_roles = {'user', 'assistant', 'system'} - if value.lower() not in valid_roles: - raise ValueError("Role must be one of 'user', 'assistant', or 'system'") - return value.lower() - - @validator('content') - def sanitize_content(cls, v): - """ Feild validator function to santize user populated feilds from HTML""" - return bleach.clean(v, strip=True) - -class Prompt(BaseModel): - """Definition of the Prompt API data type.""" - - 
messages: List[Message] = Field( - ..., - description="A list of messages comprising the conversation so far. The roles of the messages must be alternating between user and assistant. The last input message should have role user. A message with the the system role is optional, and must be the very first message if it is present.", - max_items=50000, - ) - temperature: float = Field( - 0.2, - description="The sampling temperature to use for text generation. The higher the temperature value is, the less deterministic the output text will be. It is not recommended to modify both temperature and top_p in the same call.", - ge=0.1, - le=1.0, - ) - top_p: float = Field( - 0.7, - description="The top-p sampling mass used for text generation. The top-p value determines the probability mass that is sampled at sampling time. For example, if top_p = 0.2, only the most likely tokens (summing to 0.2 cumulative probability) will be sampled. It is not recommended to modify both temperature and top_p in the same call.", - ge=0.1, - le=1.0, - ) - max_tokens: int = Field( - 1024, - description="The maximum number of tokens to generate in any given call. Note that the model is not aware of this value, and generation will simply stop at the number of tokens specified.", - ge=0, - le=1024, - format="int64", - ) - - stop: List[constr(max_length=256, pattern=r'[\s\S]*')] = Field( - description="A string or a list of strings where the API will stop generating further tokens. The returned text will not contain the stop sequence.", - max_items=256, - default=[], - ) - # stream: bool = Field(True, description="If set, partial message deltas will be sent. Tokens will be sent as data-only server-sent events (SSE) as they become available (JSON responses are prefixed by data:), with the stream terminated by a data: [DONE] message.") - - @validator('temperature') - def sanitize_temperature(cls, v): - """ Feild validator function to santize user populated feilds from HTML""" - return float(bleach.clean(str(v), strip=True)) - - @validator('top_p') - def sanitize_top_p(cls, v): - """ Feild validator function to santize user populated feilds from HTML""" - return float(bleach.clean(str(v), strip=True)) - -class ChainResponseChoices(BaseModel): - """ Definition of Chain response choices""" - - index: int = Field(default=0, ge=0, le=256, format="int64") - message: Message = Field(default=Message(role="assistant", content="")) - finish_reason: str = Field(default="", max_length=4096, pattern=r'[\s\S]*') - -class ChainResponse(BaseModel): - """Definition of Chain APIs resopnse data type""" - - id: str = Field(default="", max_length=100000, pattern=r'[\s\S]*') - choices: List[ChainResponseChoices] = Field(default=[], max_items=256) - -class HealthResponse(BaseModel): - message: str = Field(max_length=4096, pattern=r'[\s\S]*', default="") -class HealthCheck(BaseModel): - status: str = "OK" - -def get_new_thread_id(): - return str(uuid.uuid4()) -thread_id = get_new_thread_id() - -langgraph_config = { - "configurable": { - # Checkpoints are accessed by thread_id - "thread_id": thread_id, - } -} - -def _print_event(event: dict, _printed: set, max_length=1500): - return_print = "" - current_state = event.get("dialog_state") - if current_state: - print("Currently in: ", current_state[-1]) - return_print += "Currently in: " - return_print += current_state[-1] - return_print += "\n" - message = event.get("messages") - latest_msg_chatbot = "" - if message: - if isinstance(message, list): - message = message[-1] - if message.id not in 
_printed: - msg_repr = message.pretty_repr() - msg_repr_chatbot = str(message.content) - if len(msg_repr) > max_length: - msg_repr = msg_repr[:max_length] + " ... (truncated)" - msg_repr_chatbot = msg_repr_chatbot[:max_length] + " ... (truncated)" - return_print += msg_repr - latest_msg_chatbot = msg_repr_chatbot - print(msg_repr) - _printed.add(message.id) - return_print += "\n" - return return_print, latest_msg_chatbot - -def example_llm_chain(query: str, chat_history: List["Message"], **kwargs) -> Generator[str, None, None]: - """Execute a Retrieval Augmented Generation chain using the components defined above.""" - - print("thread_id", langgraph_config["configurable"]["thread_id"]) - - _printed = set() - events = assistant_graph.stream( - {"messages": ("user", query)}, langgraph_config, stream_mode="values" - ) - latest_response = "" - for event in events: - return_print, latest_msg = _print_event(event, _printed) - if latest_msg != "": - latest_response = latest_msg - yield latest_response - - -@app.post( - "/generate", - response_model=ChainResponse, - responses={ - 500: { - "description": "Internal Server Error", - "content": {"application/json": {"example": {"detail": "Internal server error occurred"}}}, - } - }, -) -async def generate_answer(request: Request, prompt: Prompt) -> StreamingResponse: - """Generate and stream the response to the provided prompt.""" - - chat_history = prompt.messages - # The last user message will be the query for the rag or llm chain - last_user_message = next((message.content for message in reversed(chat_history) if message.role == 'user'), None) - - # Find and remove the last user message if present - for i in reversed(range(len(chat_history))): - if chat_history[i].role == 'user': - del chat_history[i] - break # Remove only the last user message - - # All the other information from the prompt like the temperature, top_p etc., are llm_settings - llm_settings = {key: value for key, value in vars(prompt).items() if key not in ['messages']} - try: - generator = None - # call llm_chain since we're not doing knowledge base - generator = example_llm_chain(query=last_user_message, chat_history=chat_history, **llm_settings) - - def response_generator(): - """Convert generator streaming response into `data: ChainResponse` format for chunk - """ - # unique response id for every query - resp_id = str(uuid4()) - if generator: - # Create ChainResponse object for every token generated - for chunk in generator: - chain_response = ChainResponse() - response_choice = ChainResponseChoices(index=0, message=Message(role="assistant", content=chunk)) - chain_response.id = resp_id - chain_response.choices.append(response_choice) - - # Send generator with tokens in ChainResponse format - yield "data: " + str(chain_response.json()) + "\n\n" - chain_response = ChainResponse() - - # [DONE] indicate end of response from server - response_choice = ChainResponseChoices(finish_reason="[DONE]") - chain_response.id = resp_id - chain_response.choices.append(response_choice) - - yield "data: " + str(chain_response.json()) + "\n\n" - else: - chain_response = ChainResponse() - yield "data: " + str(chain_response.json()) + "\n\n" - - return StreamingResponse(response_generator(), media_type="text/event-stream") - - except Exception as e: - exception_msg = "Error from chain server. Please check chain-server logs for more details." 
- chain_response = ChainResponse() - response_choice = ChainResponseChoices( - index=0, message=Message(role="assistant", content=exception_msg), finish_reason="[DONE]" - ) - chain_response.choices.append(response_choice) - return StreamingResponse( - iter(["data: " + str(chain_response.json()) + "\n\n"]), media_type="text/event-stream", status_code=500 - ) - -@app.get("/health", - tags=["healthcheck"], - summary="Perform a Health Check", - response_description="Return HTTP Status Code 200 (OK)", - status_code=status.HTTP_200_OK, - response_model=HealthCheck) -def get_health() -> HealthCheck: - return HealthCheck(status="OK") - - -if __name__ == "__main__": - - - uvicorn.run(app, host="0.0.0.0", port=port) \ No newline at end of file diff --git a/industries/healthcare/agentic-healthcare-front-desk/chain_server/test_chain_server.ipynb b/industries/healthcare/agentic-healthcare-front-desk/chain_server/test_chain_server.ipynb deleted file mode 100644 index af94a29c3..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/chain_server/test_chain_server.ipynb +++ /dev/null @@ -1,71 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": 6, - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "We've already said hello. Let's get started with your registration. Could you please tell me your full name, as it appears on your identification? This will help me get you checked in and make sure all your information is accurate.--- 1.9984607696533203 seconds ---\n" - ] - } - ], - "source": [ - "import time\n", - "import json\n", - "\n", - "import requests\n", - "data = {\n", - " \"messages\": [\n", - " {\n", - " \"role\": \"user\",\n", - " \"content\": \"hello how are you?\"\n", - " }\n", - " ],\n", - " \n", - " \"max_tokens\": 256\n", - "}\n", - "\n", - "url = \"http://0.0.0.0:8081/generate\"\n", - "\n", - "start_time = time.time()\n", - "with requests.post(url, stream=True, json=data) as req:\n", - " for chunk in req.iter_lines():\n", - " raw_resp = chunk.decode(\"UTF-8\")\n", - " if not raw_resp:\n", - " continue\n", - " resp_dict = json.loads(raw_resp[6:])\n", - " resp_choices = resp_dict.get(\"choices\", [])\n", - " if len(resp_choices):\n", - " resp_str = resp_choices[0].get(\"message\", {}).get(\"content\", \"\")\n", - " print(resp_str, end =\"\")\n", - "\n", - "print(f\"--- {time.time() - start_time} seconds ---\")" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": ".venv", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.14" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/industries/healthcare/agentic-healthcare-front-desk/docker-compose.yaml b/industries/healthcare/agentic-healthcare-front-desk/docker-compose.yaml deleted file mode 100644 index 8bc03afc3..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/docker-compose.yaml +++ /dev/null @@ -1,92 +0,0 @@ -services: - chain-server: - container_name: chain-server-healthcare-assistant - image: chain-server-healthcare-assistant:${TAG:-latest} - env_file: - - path: ./vars.env - required: true - build: - context: ./ - dockerfile: Dockerfile - entrypoint: python3 chain_server/chain_server.py --assistant intake --port 8081 - ports: - - "8081:8081" - expose: - - "8081" - volumes: - - 
./graph_definitions/graph_images:/graph_images - shm_size: 5gb - - - patient-intake-ui: - container_name: patient-intake-ui - image: patient-intake-ui:${TAG:-latest} - env_file: - - path: ./vars.env - required: true - build: - context: ./ - dockerfile: Dockerfile - entrypoint: python3 graph_definitions/graph_patient_intake_only.py --port 7860 - ports: - - "7860:7860" - expose: - - "7860" - volumes: - - ./graph_definitions/graph_images:/graph_images - shm_size: 5gb - - - appointment-making-ui: - container_name: appointment-making-ui - image: appointment-making-ui:${TAG:-latest} - env_file: - - path: ./vars.env - required: true - build: - context: ./ - dockerfile: Dockerfile - entrypoint: python3 graph_definitions/graph_appointment_making_only.py --port 7860 - ports: - - "7860:7860" - expose: - - "7860" - volumes: - - ./graph_definitions/graph_images:/graph_images - shm_size: 5gb - - medication-lookup-ui: - container_name: medication-lookup-ui - image: medication-lookup-ui:${TAG:-latest} - env_file: - - path: ./vars.env - required: true - build: - context: ./ - dockerfile: Dockerfile - entrypoint: python3 graph_definitions/graph_medication_lookup_only.py --port 7860 - ports: - - "7860:7860" - expose: - - "7860" - volumes: - - ./graph_definitions/graph_images:/graph_images - shm_size: 5gb - - full-agent-ui: - container_name: full-agent-ui - image: full-agent-ui:${TAG:-latest} - env_file: - - path: ./vars.env - required: true - build: - context: ./ - dockerfile: Dockerfile - entrypoint: python3 graph_definitions/graph.py --port 7860 - ports: - - "7860:7860" - expose: - - "7860" - volumes: - - ./graph_definitions/graph_images:/graph_images - shm_size: 5gb \ No newline at end of file diff --git a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/__init__.py b/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/__init__.py deleted file mode 100644 index d50bcc445..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# SPDX-License-Identifier: Apache-2.0 -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
\ No newline at end of file diff --git a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph.py b/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph.py deleted file mode 100644 index 98fd33e00..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph.py +++ /dev/null @@ -1,732 +0,0 @@ -from dotenv import load_dotenv -import os - -from langchain_nvidia_ai_endpoints import ChatNVIDIA - -from fhirclient import client -from fhirclient.models.patient import Patient -from fhirclient.models.medication import Medication -from fhirclient.models.medicationrequest import MedicationRequest -from fhirclient.models.appointment import Appointment - -from typing import Optional -import sqlite3 -import pandas as pd - -import datetime - -from enum import Enum - -import shutil - -from langchain_community.tools.tavily_search import TavilySearchResults - -from langchain_core.tools import tool -from langchain_core.prompts import ChatPromptTemplate -from langchain_core.pydantic_v1 import BaseModel, Field -from langchain_core.runnables import Runnable, RunnableConfig -from langchain_core.runnables import RunnableLambda -from langchain_core.messages import ToolMessage - - -from langgraph.graph.message import AnyMessage, add_messages -from langgraph.prebuilt import ToolNode, tools_condition -from langgraph.checkpoint.memory import MemorySaver -from langgraph.graph import END, StateGraph, START - -import sys -from typing import Any, Dict, List, Tuple, Literal, Callable, Annotated, Literal, Optional -from typing_extensions import TypedDict - -import argparse - -SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) -sys.path.append(os.path.dirname(SCRIPT_DIR)) -from utils.ui import launch_demo_ui - -################# -### variables ### -################# -patient_id = '14867dba-fb11-4df3-9829-8e8e081b39e6' # test patient id from looking through https://launch.smarthealthit.org/ -save_graph_to_png = True - -env_var_file = "vars.env" - -local_file_constant = "sample_db/test_db.sqlite" -local_file_current = "sample_db/test_db_tmp_copy.sqlite" - - - -######################### -### get env variables ### -######################### -load_dotenv(env_var_file) # This line brings all environment variables from vars.env into os.environ -print("Your NVIDIA_API_KEY is set to: ", os.environ['NVIDIA_API_KEY']) -print("Your TAVILY_API_KEY is set to: ", os.environ['TAVILY_API_KEY']) - -assert os.environ['NVIDIA_API_KEY'] is not None, "Make sure you have your NVIDIA_API_KEY exported as a environment variable!" -assert os.environ['TAVILY_API_KEY'] is not None, "Make sure you have your TAVILY_API_KEY exported as a environment variable!" - -NVIDIA_API_KEY=os.getenv("NVIDIA_API_KEY", None) -RIVA_API_URI = os.getenv("RIVA_API_URI", None) -RIVA_ASR_FUNCTION_ID = os.getenv("RIVA_ASR_FUNCTION_ID", None) -RIVA_TTS_FUNCTION_ID = os.getenv("RIVA_TTS_FUNCTION_ID", None) - - -assert os.environ['LLM_MODEL'] is not None, "Make sure you have your LLM_MODEL exported as a environment variable!" -main_llm_model = os.getenv("LLM_MODEL", None) - -assert os.environ['LLM_MODEL'] is not None, "Make sure you have your LLM_MODEL exported as a environment variable!" -specialized_llm_model = os.getenv("LLM_MODEL", None) - -assert os.environ['BASE_URL'] is not None, "Make sure you have your BASE_URL exported as a environment variable!" 
-base_url = os.getenv("BASE_URL", None) - -### define which llm to use -main_assistant_llm = ChatNVIDIA(model=main_llm_model, base_url=base_url) -specialized_assistant_llm = ChatNVIDIA(model=specialized_llm_model, base_url=base_url) - -def update_dialog_stack(left: list[str], right: Optional[str]) -> list[str]: - """Push or pop the state.""" - if right is None: - return left - if right == "pop": - return left[:-1] - return left + [right] - -class State(TypedDict): - messages: Annotated[list[AnyMessage], add_messages] - user_info: str - dialog_state: Annotated[ - list[ - Literal[ - "assistant", - "medication_assistant", - "appointment_assistant", - ] - ], - update_dialog_stack, - ] - -######################## -### Define the tools ### -######################## - -settings = { - 'app_id': 'my_web_app', - 'api_base': 'https://r4.smarthealthit.org' -} - -smart = client.FHIRClient(settings=settings) - -@tool -def get_patient_dob() -> str: - """Retrieve the patient's date of birth.""" - patient = Patient.read(patient_id, smart.server) - return patient.birthDate.isostring - -@tool -def get_patient_medications() -> list: - """Retrieve the patient's list of medications.""" - def _med_name(med): - if med.coding: - name = next((coding.display for coding in med.coding if coding.system == 'http://www.nlm.nih.gov/research/umls/rxnorm'), None) - if name: - return name - if med.text and med.text: - return med.text - return "Unnamed Medication(TM)" - - def _get_medication_by_ref(ref, smart): - med_id = ref.split("/")[1] - return Medication.read(med_id, smart.server).code - - def _get_med_name(prescription, client=None): - if prescription.medicationCodeableConcept is not None: - med = prescription.medicationCodeableConcept - return _med_name(med) - elif prescription.medicationReference is not None and client is not None: - med = _get_medication_by_ref(prescription.medicationReference.reference, client) - return _med_name(med) - else: - return 'Error: medication not found' - - # test patient id from looking through https://launch.smarthealthit.org/ - bundle = MedicationRequest.where({'patient': patient_id}).perform(smart.server) - prescriptions = [be.resource for be in bundle.entry] if bundle is not None and bundle.entry is not None else None - - return [_get_med_name(p, smart) for p in prescriptions] - -class ApptType(Enum): - adult_physicals = "Adult physicals" - pediatric_physicals = "Pediatric physicals" - follow_up_appointments = "Follow-up appointments" - sick_visits = "Sick visits" - flu_shots = "Flu shots" - other_vaccinations = "Other vaccinations" - allergy_shots = "Allergy shots" - b12_injections = "B12 injections" - diabetes_management = "Diabetes management" - hypertension_management = "Hypertension management" - asthma_management = "Asthma management" - chronic_pain_management = "Chronic pain management" - initial_mental_health = "Initial mental health consultations" - follow_up_mental_health = "Follow-up mental health appointments" - therapy_session = "Therapy sessions" - blood_draws = "Blood draws" - urine_tests = "Urine tests" - ekgs = "EKGs" - biopsies = "Biopsies" - medication_management = "Medication management" - wart_removals = "Wart removals" - skin_tag_removals = "Skin tag removals" - ear_wax_removals = "Ear wax removals" - -@tool -def find_available_appointments( - appointment_type: ApptType, - start_date: Optional[datetime.date] = None, - end_date: Optional[datetime.date] = None, -) -> list: - """Look for available new appointments.""" - - - shutil.copyfile(local_file_constant, 
local_file_current) - - conn = sqlite3.connect(local_file_current) - - query_datetime = "SELECT * from appointment_schedule WHERE appointment_type = \"{}\" AND patient IS NULL ".format(appointment_type.value) - - - #start_date, end_date = datetime.date.fromisoformat(start_date), datetime.date.fromisoformat(end_date) - if start_date: - query_datetime += " AND datetime >= \"{}\"".format(start_date) - - if end_date: - query_datetime += " AND datetime <= \"{}\"".format(end_date + datetime.timedelta(hours=24) ) - print(query_datetime) - - available_appts = pd.read_sql( - query_datetime, conn - ) - - print(available_appts) - conn.close() - return list(available_appts['datetime']) - -@tool -def book_appointment( - appointment_datetime: datetime.datetime, - appointment_type: ApptType, -)-> pd.DataFrame: - """Book new appointments.""" - - conn = sqlite3.connect(local_file_current) - - booking_query = "UPDATE appointment_schedule SET patient = \"current_patient\" WHERE datetime = \"{}\" AND appointment_type = \"{}\"".format(appointment_datetime, appointment_type.value) - - cur = conn.cursor() - cur.execute(booking_query) - - find_patient_appt_query = "SELECT * from appointment_schedule where patient = \"current_patient\"" - - current_patient_entries = pd.read_sql(find_patient_appt_query, conn, index_col="index") - - cur.close() - conn.close() - - - return current_patient_entries - - -medication_instruction_search_tool = TavilySearchResults( - description="Search online for instructions related the patient's requested medication. Do not use to give medical advice." -) - -# In this tool we illustrate how you can define -# the different data fields that are needed for -# patient intake and the agentic llm will gather each field. -# Here we are only printing each of the fields for illustration -# of the tool, however in your own use case, you would likely want -# to make API calls to transmit the gathered data fields back -# to your own database. -@tool -def print_gathered_patient_info( - patient_name: str, - patient_dob: datetime.date, - allergies_medication: List[str], - current_symptoms: str, - current_symptoms_duration: str, - pharmacy_location: str -): - """This function prints out and transmits the gathered information for each patient intake field: - patient_name is the patient name, - patient_dob is the patient date of birth, - allergies_medication is a list of allergies in medication for the patient, - current_symptoms is a description of the current symptoms for the patient, - current_symptoms_duration is the time duration of current symptoms, - pharmacy_location is the patient pharmacy location. 
""" - - print(patient_name) - print(patient_dob) - print(allergies_medication) - print(current_symptoms) - print(current_symptoms_duration) - print(pharmacy_location) - -class Assistant: - def __init__(self, runnable: Runnable): - self.runnable = runnable - - def __call__(self, state: State, config: RunnableConfig): - while True: - result = self.runnable.invoke(state) - - if not result.tool_calls and ( - not result.content - or isinstance(result.content, list) - and not result.content[0].get("text") - ): - messages = state["messages"] + [("user", "Respond with a real output.")] - state = {**state, "messages": messages} - messages = state["messages"] + [("user", "Respond with a real output.")] - state = {**state, "messages": messages} - else: - break - return {"messages": result} - -class CompleteOrEscalate(BaseModel): - """A tool to mark the current task as completed and/or to escalate control of the dialog to the main assistant, - who can re-route the dialog based on the user's needs.""" - - cancel: bool = True - reason: str - - class Config: - schema_extra = { - "example": { - "cancel": True, - "reason": "User changed their mind about the current task.", - }, - "example 2": { - "cancel": True, - "reason": "I have fully completed the task.", - }, - "example 3": { - "cancel": False, - "reason": "I need to search the user's emails or calendar for more information.", - }, - } - - -# Medication assistant -with open('/app/graph_definitions/system_prompts/medication_lookup_system_prompt.txt', 'r') as file: - prompt = file.read() -medication_prompt = ChatPromptTemplate.from_messages( - [ - ("system", prompt), - ("placeholder", "{messages}"), - ] -) - -medication_safe_tools = [get_patient_medications, get_patient_dob, medication_instruction_search_tool] # to add later -medication_sensitive_tools = [] # to add later -medication_tools = medication_safe_tools + medication_sensitive_tools -medication_runnable = medication_prompt | specialized_assistant_llm.bind_tools( - medication_tools + [CompleteOrEscalate] -) - -with open('/app/graph_definitions/system_prompts/appointment_system_prompt.txt', 'r') as file: - guidelines_for_scheduling_appointment = file.read() - -appointment_prompt = ChatPromptTemplate.from_messages( - [ - ( - "system", - "You are a helpful customer support assistant for healthcare appointment scheduling. Your purpose is to help patients with looking up and making appointments." - "Use the provided tools as necessary." - "\nCurrent date and time: {time}." - "\nGuidelines for scheduling appointments: {guidelines_for_scheduling_appointment}." 
- ), - ("placeholder", "{messages}"), - ] -).partial(time=datetime.datetime.now(), guidelines_for_scheduling_appointment=guidelines_for_scheduling_appointment) - -appointment_safe_tools = [find_available_appointments, book_appointment] -appointment_sensitive_tools = [] -appointment_tools = appointment_safe_tools + appointment_sensitive_tools -# appointment_assistant_runnable = appointment_assistant_prompt | llm -appointment_runnable = appointment_prompt | specialized_assistant_llm.bind_tools( - appointment_tools + [CompleteOrEscalate] -) - - -# Patient Intake assistant -with open('/app/graph_definitions/system_prompts/patient_intake_system_prompt.txt', 'r') as file: - intake_system_prompt = file.read() -patient_intake_prompt = ChatPromptTemplate.from_messages( - [ - ("system", intake_system_prompt), - ("placeholder", "{messages}"), - ] -) -patient_intake_safe_tools = [print_gathered_patient_info] -patient_intake_sensitive_tools = [] -patient_intake_tools = patient_intake_safe_tools + patient_intake_sensitive_tools -patient_intake_runnable = patient_intake_prompt | specialized_assistant_llm.bind_tools( - patient_intake_tools + [CompleteOrEscalate] -) -class ToPatientIntakeAssistant(BaseModel): - """Transfers work to a specialized assistant to handle patient intake.""" - patient_name: str = Field(description="The patient's name.") - patient_dob: datetime.date = Field(description="The patient's date of birth.") - allergies_medication: List[str] = Field(description="A list of allergies in medication for the patient.") - current_symptoms: str = Field(description="A description of the current symptoms for the patient.") - current_symptoms_duration: str = Field(description="The time duration of current symptoms.") - pharmacy_location: str = Field(description="The patient's pharmacy location.") - request: str = Field( - description="Any necessary information the patient intake assistant should clarify before proceeding." - ) - -# Primary Assistant -class ToFindMedicationInfoAssistant(BaseModel): - """Transfers work to a specialized assistant to handle medication.""" - - request: str = Field( - description="Any necessary information the medication assistant should clarify before proceeding." - ) - - -class ToFindAppointmentInfoAssistant(BaseModel): - """Transfers work to a specialized assistant to handle appointment type suggestion and appointment bookings.""" - - start_date: datetime.date = Field(description="The start date of the search window for appointments.") - end_date: datetime.date = Field(description="The end date of the the search window for appointments.") - appointment_datetime: datetime.datetime = Field(description="The intended date and time of appointment the customer wants to book.") - appointment_type: ApptType = Field(description="The type of appointment.") - request: str = Field( - description="Any additional information or requests from the user regarding their symptoms that will help determine the type of appointment." - ) - - class Config: - schema_extra = { - "example": { - # "start_date": "2023-07-01", - # "end_date": "2023-07-05", - # "appointment_type": "Adult physicals", - "request": "I have chest pain when I exercise, I want to see someone.", - } - } - -# The top-level assistant performs general Q&A and delegates specialized tasks to other assistants. 
-# The task delegation is a simple form of semantic routing / does simple intent detection -primary_assistant_prompt = ChatPromptTemplate.from_messages( - [ - ( - "system", - "You are a helpful customer support assistant for healthcare patients and administrators. " - "Your primary role is to determine what the customer needs help with, whether they want to inquire about medication, make appointments, or register as a new patient. " - "If a customer requests to see the list of current medications for a patient, describe their symptoms for making an appointment, mention they want to register as a patient, " - "delegate the task to the appropriate specialized assistant by invoking the corresponding tool. You are not able to retrieve these information or make these types of changes yourself." - " Only the specialized assistants are given permission to do this for the user." - "The user is not aware of the different specialized assistants, so do not mention them; just quietly delegate through function calls. " - "Provide detailed information to the customer, and always double-check the database before concluding that information is unavailable. " - " When searching, be persistent. Expand your query bounds if the first search returns no results. " - " If a search comes up empty, expand your search before giving up." - "\nCurrent time: {time}.", - ), - ("placeholder", "{messages}"), - ] -).partial(time=datetime.datetime.now()) - -primary_assistant_tools = [ - #TavilySearchResults(max_results=1), -] - -assistant_runnable = primary_assistant_prompt | main_assistant_llm.bind_tools( - primary_assistant_tools - + [ - ToFindMedicationInfoAssistant, - ToFindAppointmentInfoAssistant, - ToPatientIntakeAssistant, - ] -) - -def create_entry_node(assistant_name: str, new_dialog_state: str) -> Callable: - def entry_node(state: State) -> dict: - tool_call_id = state["messages"][-1].tool_calls[0]["id"] - return { - "messages": [ - ToolMessage( - content=f"The assistant is now the {assistant_name}. Reflect on the above conversation between the host assistant and the user." - f" The user's intent is unsatisfied. Use the provided tools to assist the user. Remember, you are {assistant_name}," - " and the booking, update, other other action is not complete until after you have successfully invoked the appropriate tool." - " If the user changes their mind or needs help for other tasks, call the CompleteOrEscalate function to let the primary host assistant take control." - " Do not mention who you are - just act as the proxy for the assistant.", - tool_call_id=tool_call_id, - ) - ], - "dialog_state": new_dialog_state, - } - - return entry_node - - -builder = StateGraph(State) - -def handle_tool_error(state) -> dict: - error = state.get("error") - tool_calls = state["messages"][-1].tool_calls - return { - "messages": [ - ToolMessage( - content=f"Error: {repr(error)}\n please fix your mistakes.", - tool_call_id=tc["id"], - ) - for tc in tool_calls - ] - } - -def create_tool_node_with_fallback(tools: list) -> dict: - return ToolNode(tools).with_fallbacks( - [RunnableLambda(handle_tool_error)], exception_key="error" - ) - -# This node will be shared for exiting all specialized assistants -def pop_dialog_state(state: State) -> dict: - """Pop the dialog stack and return to the main assistant. - - This lets the full graph explicitly track the dialog flow and delegate control - to specific sub-graphs. 
- """ - messages = [] - if state["messages"][-1].tool_calls: - # Note: Doesn't currently handle the edge case where the llm performs parallel tool calls - messages.append( - ToolMessage( - content="Resuming dialog with the host assistant. Please reflect on the past conversation and assist the user as needed.", - tool_call_id=state["messages"][-1].tool_calls[0]["id"], - ) - ) - return { - "dialog_state": "pop", - "messages": messages, - } - -# Medication assistant -builder.add_node( - "enter_medication_assistant", - create_entry_node("Medication Finding and Info Assistant", "medication_assistant"), -) -builder.add_node("medication_assistant", Assistant(medication_runnable)) -builder.add_edge("enter_medication_assistant", "medication_assistant") -builder.add_node( - "medication_sensitive_tools", - create_tool_node_with_fallback(medication_sensitive_tools), -) -builder.add_node( - "medication_safe_tools", - create_tool_node_with_fallback(medication_safe_tools), -) - -def route_medication_assistant( - state: State, -) -> Literal[ - "medication_sensitive_tools", - "medication_safe_tools", - "leave_skill", - "__end__", -]: - route = tools_condition(state) - if route == END: - return END - tool_calls = state["messages"][-1].tool_calls - did_cancel = any(tc["name"] == CompleteOrEscalate.__name__ for tc in tool_calls) - if did_cancel: - return "leave_skill" - safe_toolnames = [t.name for t in medication_safe_tools] - if all(tc["name"] in safe_toolnames for tc in tool_calls): - return "medication_safe_tools" - return "medication_sensitive_tools" - -builder.add_edge("medication_sensitive_tools", "medication_assistant") -builder.add_edge("medication_safe_tools", "medication_assistant") -builder.add_conditional_edges("medication_assistant", route_medication_assistant) - -# Appointment assistant -builder.add_node( - "enter_appointment_assistant", - create_entry_node("Appointment Type and Scheduling Assistant", "appointment_assistant"), -) -builder.add_node("appointment_assistant", Assistant(appointment_runnable)) -builder.add_edge("enter_appointment_assistant", "appointment_assistant") -builder.add_node( - "appointment_safe_tools", - create_tool_node_with_fallback(appointment_safe_tools), -) -builder.add_node( - "appointment_sensitive_tools", - create_tool_node_with_fallback(appointment_sensitive_tools), -) -def route_appointment_assist( - state: State, -) -> Literal[ - "appointment_safe_tools", - "appointment_sensitive_tools", - "leave_skill", - "__end__", -]: - route = tools_condition(state) - if route == END: - return END - tool_calls = state["messages"][-1].tool_calls - did_cancel = any(tc["name"] == CompleteOrEscalate.__name__ for tc in tool_calls) - if did_cancel: - return "leave_skill" - safe_toolnames = [t.name for t in appointment_safe_tools] - if all(tc["name"] in safe_toolnames for tc in tool_calls): - return "appointment_safe_tools" - return "appointment_sensitive_tools" - -builder.add_edge("appointment_sensitive_tools", "appointment_assistant") -builder.add_edge("appointment_safe_tools", "appointment_assistant") -builder.add_conditional_edges("appointment_assistant", route_appointment_assist) - - - -# Patient intake assistant -builder.add_node( - "enter_patient_intake_assistant", - create_entry_node("Patient Intake Assistant", "patient_intake_assistant"), -) -builder.add_node("patient_intake_assistant", Assistant(patient_intake_runnable)) -builder.add_edge("enter_patient_intake_assistant", "patient_intake_assistant") -builder.add_node( - "patient_intake_sensitive_tools", - 
create_tool_node_with_fallback(patient_intake_sensitive_tools), -) -builder.add_node( - "patient_intake_safe_tools", - create_tool_node_with_fallback(patient_intake_safe_tools), -) - -def route_patient_intake_assistant( - state: State, -) -> Literal[ - "patient_intake_sensitive_tools", - "patient_intake_safe_tools", - "leave_skill", - "__end__", -]: - route = tools_condition(state) - if route == END: - return END - tool_calls = state["messages"][-1].tool_calls - did_cancel = any(tc["name"] == CompleteOrEscalate.__name__ for tc in tool_calls) - if did_cancel: - return "leave_skill" - safe_toolnames = [t.name for t in patient_intake_safe_tools] - if all(tc["name"] in safe_toolnames for tc in tool_calls): - return "patient_intake_safe_tools" - return "patient_intake_sensitive_tools" - -builder.add_edge("patient_intake_sensitive_tools", "patient_intake_assistant") -builder.add_edge("patient_intake_safe_tools", "patient_intake_assistant") -builder.add_conditional_edges("patient_intake_assistant", route_patient_intake_assistant) - - -# Primary assistant -builder.add_node("primary_assistant", Assistant(assistant_runnable)) -builder.add_node( - "primary_assistant_tools", create_tool_node_with_fallback(primary_assistant_tools) -) - -def route_primary_assistant( - state: State, -) -> Literal[ - "primary_assistant_tools", - "enter_medication_assistant", - "enter_appointment_assistant", - "__end__", -]: - route = tools_condition(state) - if route == END: - return END - tool_calls = state["messages"][-1].tool_calls - if tool_calls: - if tool_calls[0]["name"] == ToFindAppointmentInfoAssistant.__name__: - return "enter_appointment_assistant" - elif tool_calls[0]["name"] == ToFindMedicationInfoAssistant.__name__: - return "enter_medication_assistant" - elif tool_calls[0]["name"] == ToPatientIntakeAssistant.__name__: - return "enter_patient_intake_assistant" - return "primary_assistant_tools" - raise ValueError("Invalid route") - -# The assistant can route to one of the delegated assistants, -# directly use a tool, or directly respond to the user -builder.add_conditional_edges( - "primary_assistant", - route_primary_assistant, - { - "enter_medication_assistant": "enter_medication_assistant", - "enter_appointment_assistant": "enter_appointment_assistant", - "enter_patient_intake_assistant": "enter_patient_intake_assistant", - "primary_assistant_tools": "primary_assistant_tools", - END: END, - }, -) -builder.add_edge("primary_assistant_tools", "primary_assistant") - -builder.add_node("leave_skill", pop_dialog_state) -builder.add_edge("leave_skill", "primary_assistant") - -# Each delegated workflow can directly respond to the user -# When the user responds, we want to return to the currently active workflow -def route_to_workflow( - state: State, -) -> Literal[ - "primary_assistant", - "medication_assistant", - "appointment_assistant", - "patient_intake_assistant", -]: - """If we are in a delegated state, route directly to the appropriate assistant.""" - dialog_state = state.get("dialog_state") - if not dialog_state: - return "primary_assistant" - return dialog_state[-1] - -#builder.add_conditional_edges("fetch_user_info", route_to_workflow) - -builder.add_conditional_edges(START, route_to_workflow) -# Compile graph -memory = MemorySaver() -full_graph = builder.compile( - checkpointer=memory, - # To enable interrupts before sensitive tools and - # let the user approve or deny the use of sensitive tools, - # enable this interrupt_before param and also check for snapshot.next - # as demonstrated by 
https://langchain-ai.github.io/langgraph/tutorials/customer-support/customer-support - #interrupt_before=[ - # "medication_sensitive_tools", - # "appointment_sensitive_tools", - #], -) - -if save_graph_to_png: - - with open("/graph_images/appgraph.png", "wb") as png: - png.write(full_graph.get_graph(xray=True).draw_mermaid_png()) -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7860, - help = "Specify the port number for the simple voice UI to run at.") - - args = parser.parse_args() - server_port = args.port - launch_demo_ui(full_graph, server_port, NVIDIA_API_KEY, RIVA_ASR_FUNCTION_ID, RIVA_TTS_FUNCTION_ID, RIVA_API_URI) - - - - diff --git a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_appointment_making_only.py b/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_appointment_making_only.py deleted file mode 100644 index 150367f46..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_appointment_making_only.py +++ /dev/null @@ -1,261 +0,0 @@ -from dotenv import load_dotenv -import os - -from langchain_nvidia_ai_endpoints import ChatNVIDIA - - -from typing import Optional -import sqlite3 -import pandas as pd - -import datetime - -from enum import Enum - -import shutil - -from langchain_core.tools import tool -from langchain_core.prompts import ChatPromptTemplate -from langchain_core.runnables import Runnable, RunnableConfig -from langchain_core.runnables import RunnableLambda -from langchain_core.messages import ToolMessage - - -from langgraph.graph.message import AnyMessage, add_messages -from langgraph.prebuilt import ToolNode, tools_condition -from langgraph.checkpoint.memory import MemorySaver -from langgraph.graph import END, StateGraph, START - -import sys -from typing import Any, Dict, List, Tuple, Literal, Callable, Annotated, Literal, Optional -from typing_extensions import TypedDict - -import argparse - -SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) -sys.path.append(os.path.dirname(SCRIPT_DIR)) -from utils.ui import launch_demo_ui - -################# -### variables ### -################# -patient_id = '14867dba-fb11-4df3-9829-8e8e081b39e6' # test patient id from looking through https://launch.smarthealthit.org/ -save_graph_to_png = True - -env_var_file = "vars.env" - -local_file_constant = "sample_db/test_db.sqlite" -local_file_current = "sample_db/test_db_tmp_copy.sqlite" - - - -######################### -### get env variables ### -######################### -load_dotenv(env_var_file) # This line brings all environment variables from vars.env into os.environ -print("Your NVIDIA_API_KEY is set to: ", os.environ['NVIDIA_API_KEY']) - -assert os.environ['NVIDIA_API_KEY'] is not None, "Make sure you have your NVIDIA_API_KEY exported as a environment variable!" - -NVIDIA_API_KEY=os.getenv("NVIDIA_API_KEY", None) -RIVA_API_URI = os.getenv("RIVA_API_URI", None) -RIVA_ASR_FUNCTION_ID = os.getenv("RIVA_ASR_FUNCTION_ID", None) -RIVA_TTS_FUNCTION_ID = os.getenv("RIVA_TTS_FUNCTION_ID", None) - -assert os.environ['LLM_MODEL'] is not None, "Make sure you have your LLM_MODEL exported as a environment variable!" -llm_model = os.getenv("LLM_MODEL", None) - -assert os.environ['BASE_URL'] is not None, "Make sure you have your BASE_URL exported as a environment variable!" 
-base_url = os.getenv("BASE_URL", None) - -### define which llm to use -assistant_llm = ChatNVIDIA(model=llm_model, base_url=base_url) - -######################## -### Define the tools ### -######################## - -class ApptType(Enum): - adult_physicals = "Adult physicals" - pediatric_physicals = "Pediatric physicals" - follow_up_appointments = "Follow-up appointments" - sick_visits = "Sick visits" - flu_shots = "Flu shots" - other_vaccinations = "Other vaccinations" - allergy_shots = "Allergy shots" - b12_injections = "B12 injections" - diabetes_management = "Diabetes management" - hypertension_management = "Hypertension management" - asthma_management = "Asthma management" - chronic_pain_management = "Chronic pain management" - initial_mental_health = "Initial mental health consultations" - follow_up_mental_health = "Follow-up mental health appointments" - therapy_session = "Therapy sessions" - blood_draws = "Blood draws" - urine_tests = "Urine tests" - ekgs = "EKGs" - biopsies = "Biopsies" - medication_management = "Medication management" - wart_removals = "Wart removals" - skin_tag_removals = "Skin tag removals" - ear_wax_removals = "Ear wax removals" - -@tool -def find_available_appointments( - appointment_type: ApptType, - start_date: Optional[datetime.date] = None, - end_date: Optional[datetime.date] = None, -) -> list: - """Look for available new appointments.""" - - - shutil.copyfile(local_file_constant, local_file_current) - - conn = sqlite3.connect(local_file_current) - - query_datetime = "SELECT * from appointment_schedule WHERE appointment_type = \"{}\" AND patient IS NULL ".format(appointment_type.value) - - - #start_date, end_date = datetime.date.fromisoformat(start_date), datetime.date.fromisoformat(end_date) - if start_date: - query_datetime += " AND datetime >= \"{}\"".format(start_date) - - if end_date: - query_datetime += " AND datetime <= \"{}\"".format(end_date + datetime.timedelta(hours=24) ) - print(query_datetime) - - available_appts = pd.read_sql( - query_datetime, conn - ) - - print(available_appts) - conn.close() - return list(available_appts['datetime']) - -@tool -def book_appointment( - appointment_datetime: datetime.datetime, - appointment_type: ApptType, -)-> pd.DataFrame: - """Book new appointments.""" - - conn = sqlite3.connect(local_file_current) - - booking_query = "UPDATE appointment_schedule SET patient = \"current_patient\" WHERE datetime = \"{}\" AND appointment_type = \"{}\"".format(appointment_datetime, appointment_type.value) - - cur = conn.cursor() - cur.execute(booking_query) - - find_patient_appt_query = "SELECT * from appointment_schedule where patient = \"current_patient\"" - - current_patient_entries = pd.read_sql(find_patient_appt_query, conn, index_col="index") - - cur.close() - conn.close() - - - return current_patient_entries - - -class State(TypedDict): - messages: Annotated[list[AnyMessage], add_messages] -class Assistant: - def __init__(self, runnable: Runnable): - self.runnable = runnable - - def __call__(self, state: State, config: RunnableConfig): - while True: - result = self.runnable.invoke(state) - - if not result.tool_calls and ( - not result.content - or isinstance(result.content, list) - and not result.content[0].get("text") - ): - messages = state["messages"] + [("user", "Respond with a real output.")] - state = {**state, "messages": messages} - messages = state["messages"] + [("user", "Respond with a real output.")] - state = {**state, "messages": messages} - else: - break - return {"messages": result} - -with 
open('/app/graph_definitions/system_prompts/appointment_system_prompt.txt', 'r') as file: - guidelines_for_scheduling_appointment = file.read() - -appointment_prompt = ChatPromptTemplate.from_messages( - [ - ( - "system", - "You are a helpful customer support assistant for healthcare appointment scheduling. Your purpose is to help patients with looking up and making appointments." - "Use the provided tools as necessary." - "\nCurrent date and time: {time}." - "\nGuidelines for scheduling appointments: {guidelines_for_scheduling_appointment}." - ), - ("placeholder", "{messages}"), - ] -).partial(time=datetime.datetime.now(), guidelines_for_scheduling_appointment=guidelines_for_scheduling_appointment) - -appointment_tools = [find_available_appointments, book_appointment] -# appointment_assistant_runnable = appointment_assistant_prompt | llm -appointment_runnable = appointment_prompt | assistant_llm.bind_tools( - appointment_tools -) - - - - -builder = StateGraph(State) - -def handle_tool_error(state) -> dict: - error = state.get("error") - tool_calls = state["messages"][-1].tool_calls - return { - "messages": [ - ToolMessage( - content=f"Error: {repr(error)}\n please fix your mistakes.", - tool_call_id=tc["id"], - ) - for tc in tool_calls - ] - } - -def create_tool_node_with_fallback(tools: list) -> dict: - return ToolNode(tools).with_fallbacks( - [RunnableLambda(handle_tool_error)], exception_key="error" - ) - -# Appointment assistant -# Define nodes: these do the work -builder.add_node("appointment_assistant", Assistant(appointment_runnable)) -builder.add_node("tools", create_tool_node_with_fallback(appointment_tools)) -# Define edges: these determine how the control flow moves -builder.add_edge(START, "appointment_assistant") -builder.add_conditional_edges( - "appointment_assistant", - tools_condition, -) -builder.add_edge("tools", "appointment_assistant") - -# Compile graph -memory = MemorySaver() -appt_graph = builder.compile( - checkpointer=memory, -) - -if save_graph_to_png: - - with open("/graph_images/appgraph_appointment.png", "wb") as png: - png.write(appt_graph.get_graph(xray=True).draw_mermaid_png()) - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7860, - help = "Specify the port number for the simple voice UI to run at.") - - args = parser.parse_args() - server_port = args.port - launch_demo_ui(appt_graph, server_port, NVIDIA_API_KEY, RIVA_ASR_FUNCTION_ID, RIVA_TTS_FUNCTION_ID, RIVA_API_URI) - - - diff --git a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_images/appgraph.png b/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_images/appgraph.png deleted file mode 100644 index 2fd24c71f..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_images/appgraph.png and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_images/appgraph_appointment.png b/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_images/appgraph_appointment.png deleted file mode 100644 index 97d3c9ff7..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_images/appgraph_appointment.png and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_images/appgraph_medication_lookup.png 
b/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_images/appgraph_medication_lookup.png deleted file mode 100644 index 45396c12a..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_images/appgraph_medication_lookup.png and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_images/appgraph_patient_intake.png b/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_images/appgraph_patient_intake.png deleted file mode 100644 index 4b4985f3d..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_images/appgraph_patient_intake.png and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_medication_lookup_only.py b/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_medication_lookup_only.py deleted file mode 100644 index f87a79a8b..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_medication_lookup_only.py +++ /dev/null @@ -1,225 +0,0 @@ -from dotenv import load_dotenv -import os - -from langchain_nvidia_ai_endpoints import ChatNVIDIA - -from fhirclient import client -from fhirclient.models.patient import Patient -from fhirclient.models.medication import Medication -from fhirclient.models.medicationrequest import MedicationRequest -from fhirclient.models.appointment import Appointment - -from typing import Optional -import sqlite3 -import pandas as pd - -import datetime - -from enum import Enum - -import shutil - -from langchain_community.tools.tavily_search import TavilySearchResults - -from langchain_core.tools import tool -from langchain_core.prompts import ChatPromptTemplate -from langchain_core.pydantic_v1 import BaseModel, Field -from langchain_core.runnables import Runnable, RunnableConfig -from langchain_core.runnables import RunnableLambda -from langchain_core.messages import ToolMessage - - -from langgraph.graph.message import AnyMessage, add_messages -from langgraph.prebuilt import ToolNode, tools_condition -from langgraph.checkpoint.memory import MemorySaver -from langgraph.graph import END, StateGraph, START - -import sys -from typing import Any, Dict, List, Tuple, Literal, Callable, Annotated, Literal, Optional -from typing_extensions import TypedDict - -import argparse - -SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) -sys.path.append(os.path.dirname(SCRIPT_DIR)) -from utils.ui import launch_demo_ui - -################# -### variables ### -################# -patient_id = '14867dba-fb11-4df3-9829-8e8e081b39e6' # test patient id from looking through https://launch.smarthealthit.org/ -save_graph_to_png = True -env_var_file = "vars.env" - - -######################### -### get env variables ### -######################### -load_dotenv(env_var_file) # This line brings all environment variables from vars.env into os.environ -print("Your NVIDIA_API_KEY is set to: ", os.environ['NVIDIA_API_KEY']) -print("Your TAVILY_API_KEY is set to: ", os.environ['TAVILY_API_KEY']) - -assert os.environ['NVIDIA_API_KEY'] is not None, "Make sure you have your NVIDIA_API_KEY exported as a environment variable!" -assert os.environ['TAVILY_API_KEY'] is not None, "Make sure you have your TAVILY_API_KEY exported as a environment variable!" 
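A note on the startup checks above: the deleted scripts echo the full `NVIDIA_API_KEY` and `TAVILY_API_KEY` values to stdout via `print`, which ends up in container logs. If a confirmation message is wanted at all, a masked form is safer. The sketch below is illustrative only; `mask_secret` is a hypothetical helper, not part of the repository.

```python
# Illustrative sketch only; not part of the deleted files.
import os


def mask_secret(value: str, visible: int = 4) -> str:
    """Return a log-safe form of a secret, keeping only a short prefix."""
    if not value:
        return "<not set>"
    return value[:visible] + "*" * max(len(value) - visible, 0)


# Example usage:
# print("NVIDIA_API_KEY:", mask_secret(os.getenv("NVIDIA_API_KEY", "")))
# print("TAVILY_API_KEY:", mask_secret(os.getenv("TAVILY_API_KEY", "")))
```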
-NVIDIA_API_KEY=os.getenv("NVIDIA_API_KEY", None) -RIVA_API_URI = os.getenv("RIVA_API_URI", None) - -RIVA_ASR_FUNCTION_ID = os.getenv("RIVA_ASR_FUNCTION_ID", None) -RIVA_TTS_FUNCTION_ID = os.getenv("RIVA_TTS_FUNCTION_ID", None) - -assert os.environ['LLM_MODEL'] is not None, "Make sure you have your LLM_MODEL exported as a environment variable!" -llm_model = os.getenv("LLM_MODEL", None) - -assert os.environ['BASE_URL'] is not None, "Make sure you have your BASE_URL exported as a environment variable!" -base_url = os.getenv("BASE_URL", None) - -### define which llm to use -assistant_llm = ChatNVIDIA(model=llm_model, base_url=base_url) - -######################## -### Define the tools ### -######################## - -settings = { - 'app_id': 'my_web_app', - 'api_base': 'https://r4.smarthealthit.org' -} - -smart = client.FHIRClient(settings=settings) - -@tool -def get_patient_dob() -> str: - """Retrieve the patient's date of birth.""" - patient = Patient.read(patient_id, smart.server) - return patient.birthDate.isostring - -@tool -def get_patient_medications() -> list: - """Retrieve the patient's list of medications.""" - def _med_name(med): - if med.coding: - name = next((coding.display for coding in med.coding if coding.system == 'http://www.nlm.nih.gov/research/umls/rxnorm'), None) - if name: - return name - if med.text and med.text: - return med.text - return "Unnamed Medication(TM)" - - def _get_medication_by_ref(ref, smart): - med_id = ref.split("/")[1] - return Medication.read(med_id, smart.server).code - - def _get_med_name(prescription, client=None): - if prescription.medicationCodeableConcept is not None: - med = prescription.medicationCodeableConcept - return _med_name(med) - elif prescription.medicationReference is not None and client is not None: - med = _get_medication_by_ref(prescription.medicationReference.reference, client) - return _med_name(med) - else: - return 'Error: medication not found' - - # test patient id from looking through https://launch.smarthealthit.org/ - bundle = MedicationRequest.where({'patient': patient_id}).perform(smart.server) - prescriptions = [be.resource for be in bundle.entry] if bundle is not None and bundle.entry is not None else None - - return [_get_med_name(p, smart) for p in prescriptions] - - -medication_instruction_search_tool = TavilySearchResults( - description="Search online for instructions related the patient's requested medication. Do not use to give medical advice." 
-) - -class State(TypedDict): - messages: Annotated[list[AnyMessage], add_messages] - -class Assistant: - def __init__(self, runnable: Runnable): - self.runnable = runnable - - def __call__(self, state: State, config: RunnableConfig): - while True: - result = self.runnable.invoke(state) - - if not result.tool_calls and ( - not result.content - or isinstance(result.content, list) - and not result.content[0].get("text") - ): - messages = state["messages"] + [("user", "Respond with a real output.")] - state = {**state, "messages": messages} - messages = state["messages"] + [("user", "Respond with a real output.")] - state = {**state, "messages": messages} - else: - break - return {"messages": result} - -# Medication assistant -with open('/app/graph_definitions/system_prompts/medication_lookup_system_prompt.txt', 'r') as file: - prompt = file.read() -medication_prompt = ChatPromptTemplate.from_messages( - [ - ("system", prompt), - ("placeholder", "{messages}"), - ] -) - -medication_tools = [get_patient_medications, get_patient_dob, medication_instruction_search_tool] # to add later -medication_runnable = medication_prompt | assistant_llm.bind_tools(medication_tools) - - - -builder = StateGraph(State) - -def handle_tool_error(state) -> dict: - error = state.get("error") - tool_calls = state["messages"][-1].tool_calls - return { - "messages": [ - ToolMessage( - content=f"Error: {repr(error)}\n please fix your mistakes.", - tool_call_id=tc["id"], - ) - for tc in tool_calls - ] - } - -def create_tool_node_with_fallback(tools: list) -> dict: - return ToolNode(tools).with_fallbacks( - [RunnableLambda(handle_tool_error)], exception_key="error" - ) - - -# Medication assistant -builder.add_node("medication_assistant", Assistant(medication_runnable)) -builder.add_node("tools", create_tool_node_with_fallback(medication_tools)) - - -# Define edges: these determine how the control flow moves -builder.add_edge(START, "medication_assistant") -builder.add_conditional_edges( - "medication_assistant", - tools_condition, -) -builder.add_edge("tools", "medication_assistant") - - -memory = MemorySaver() -medication_lookup_graph = builder.compile(checkpointer=memory) - -if save_graph_to_png: - - with open("/graph_images/appgraph_medication_lookup.png", "wb") as png: - png.write(medication_lookup_graph.get_graph(xray=True).draw_mermaid_png()) -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7860, - help = "Specify the port number for the simple voice UI to run at.") - - args = parser.parse_args() - server_port = args.port - launch_demo_ui(medication_lookup_graph, server_port, NVIDIA_API_KEY, RIVA_ASR_FUNCTION_ID, RIVA_TTS_FUNCTION_ID, RIVA_API_URI) - - - - diff --git a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_patient_intake_only.py b/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_patient_intake_only.py deleted file mode 100644 index 07701c679..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/graph_patient_intake_only.py +++ /dev/null @@ -1,190 +0,0 @@ -from dotenv import load_dotenv -import os - -from langchain_nvidia_ai_endpoints import ChatNVIDIA - - -import datetime - -from enum import Enum - -from langchain_core.tools import tool -from langchain_core.prompts import ChatPromptTemplate -from langchain_core.runnables import Runnable, RunnableConfig -from langchain_core.runnables import RunnableLambda -from langchain_core.messages import ToolMessage - 
- -from langgraph.graph.message import AnyMessage, add_messages -from langgraph.prebuilt import ToolNode, tools_condition -from langgraph.checkpoint.memory import MemorySaver -from langgraph.graph import END, StateGraph, START - -import sys -from typing import Any, Dict, List, Tuple, Literal, Callable, Annotated, Literal, Optional -from typing_extensions import TypedDict - -import argparse - - -SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) -sys.path.append(os.path.dirname(SCRIPT_DIR)) -from utils.ui import launch_demo_ui - -################# -### variables ### -################# -patient_id = '14867dba-fb11-4df3-9829-8e8e081b39e6' # test patient id from looking through https://launch.smarthealthit.org/ -save_graph_to_png = True - -env_var_file = "vars.env" - - -######################### -### get env variables ### -######################### -load_dotenv(env_var_file) # This line brings all environment variables from vars.env into os.environ -print("Your NVIDIA_API_KEY is set to: ", os.environ['NVIDIA_API_KEY']) - -assert os.environ['NVIDIA_API_KEY'] is not None, "Make sure you have your NVIDIA_API_KEY exported as a environment variable!" -NVIDIA_API_KEY=os.getenv("NVIDIA_API_KEY", None) - -RIVA_API_URI = os.getenv("RIVA_API_URI", None) -RIVA_ASR_FUNCTION_ID = os.getenv("RIVA_ASR_FUNCTION_ID", None) -RIVA_TTS_FUNCTION_ID = os.getenv("RIVA_TTS_FUNCTION_ID", None) - -assert os.environ['LLM_MODEL'] is not None, "Make sure you have your LLM_MODEL exported as a environment variable!" -llm_model = os.getenv("LLM_MODEL", None) - -assert os.environ['BASE_URL'] is not None, "Make sure you have your BASE_URL exported as a environment variable!" -base_url = os.getenv("BASE_URL", None) - -### define which llm to use -assistant_llm = ChatNVIDIA(model=llm_model, base_url=base_url) - -######################## -### Define the tools ### -######################## -# In this tool we illustrate how you can define -# the different data fields that are needed for -# patient intake and the agentic llm will gather each field. -# Here we are only printing each of the fields for illustration -# of the tool, however in your own use case, you would likely want -# to make API calls to transmit the gathered data fields back -# to your own database. -@tool -def print_gathered_patient_info( - patient_name: str, - patient_dob: datetime.date, - allergies_medication: List[str], - current_symptoms: str, - current_symptoms_duration: str, - pharmacy_location: str -): - """This function prints out and transmits the gathered information for each patient intake field: - patient_name is the patient name, - patient_dob is the patient date of birth, - allergies_medication is a list of allergies in medication for the patient, - current_symptoms is a description of the current symptoms for the patient, - current_symptoms_duration is the time duration of current symptoms, - pharmacy_location is the patient pharmacy location. 
""" - - print(patient_name) - print(patient_dob) - print(allergies_medication) - print(current_symptoms) - print(current_symptoms_duration) - print(pharmacy_location) - -class State(TypedDict): - messages: Annotated[list[AnyMessage], add_messages] - -class Assistant: - def __init__(self, runnable: Runnable): - self.runnable = runnable - - def __call__(self, state: State, config: RunnableConfig): - while True: - result = self.runnable.invoke(state) - - if not result.tool_calls and ( - not result.content - or isinstance(result.content, list) - and not result.content[0].get("text") - ): - messages = state["messages"] + [("user", "Respond with a real output.")] - state = {**state, "messages": messages} - messages = state["messages"] + [("user", "Respond with a real output.")] - state = {**state, "messages": messages} - else: - break - return {"messages": result} - - -# Patient Intake assistant -with open('/app/graph_definitions/system_prompts/patient_intake_system_prompt.txt', 'r') as file: - prompt = file.read() - -patient_intake_prompt = ChatPromptTemplate.from_messages( - [ - ("system", prompt), - ("placeholder", "{messages}"), - ] -) - -patient_intake_tools = [print_gathered_patient_info] -patient_intake_runnable = patient_intake_prompt | assistant_llm.bind_tools(patient_intake_tools) - - -builder = StateGraph(State) - -def handle_tool_error(state) -> dict: - error = state.get("error") - tool_calls = state["messages"][-1].tool_calls - return { - "messages": [ - ToolMessage( - content=f"Error: {repr(error)}\n please fix your mistakes.", - tool_call_id=tc["id"], - ) - for tc in tool_calls - ] - } - -def create_tool_node_with_fallback(tools: list) -> dict: - return ToolNode(tools).with_fallbacks( - [RunnableLambda(handle_tool_error)], exception_key="error" - ) - -# Define nodes: these do the work -builder.add_node("patient_intake_assistant", Assistant(patient_intake_runnable)) -builder.add_node("tools", create_tool_node_with_fallback(patient_intake_tools)) -# Define edges: these determine how the control flow moves -builder.add_edge(START, "patient_intake_assistant") -builder.add_conditional_edges( - "patient_intake_assistant", - tools_condition, -) -builder.add_edge("tools", "patient_intake_assistant") - - -# The checkpointer lets the graph persist its state -# this is a complete memory for the entire graph. 
-memory = MemorySaver() -intake_graph = builder.compile(checkpointer=memory) - -if save_graph_to_png: - - with open("/graph_images/appgraph_patient_intake.png", "wb") as png: - png.write(intake_graph.get_graph(xray=True).draw_mermaid_png()) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7860, - help = "Specify the port number for the simple voice UI to run at.") - - args = parser.parse_args() - server_port = args.port - launch_demo_ui(intake_graph, server_port, NVIDIA_API_KEY, RIVA_ASR_FUNCTION_ID, RIVA_TTS_FUNCTION_ID, RIVA_API_URI) - \ No newline at end of file diff --git a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/system_prompts/appointment_system_prompt.txt b/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/system_prompts/appointment_system_prompt.txt deleted file mode 100644 index 820e744da..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/system_prompts/appointment_system_prompt.txt +++ /dev/null @@ -1,49 +0,0 @@ -Ask questions one at a time and keep asking until you gather all the necessary information to schedule the appointment. -Here are some guidelines for asking patients for information to schedule/make an appointment: - -As a first step, introduce yourself as the appointment scheduling assistant (e.g. "Welcome! I'm your appointment scheduling assistant!"). - -As the second step, determine the type of appointment: -Ask the patient to specify the type of appointment they need to schedule (e.g. "Please tell me what type of appointment you would like to make? Are you looking to schedule a flu shot, a physical, a therapy session, or something else?"). The available Appointment Type and Suggested Duration are: - - Adult physicals: 30 to 60 minutes - Pediatric physicals: 30 to 60 minutes - Follow-up appointments: 15 to 30 minutes - Sick visits: 15 to 30 minutes - Flu shots: 15 minutes - Other vaccinations: 15 to 30 minutes - Allergy shots: 30 minutes - B12 injections: 15 minutes - Diabetes management: 30 to 60 minutes - Hypertension management: 30 to 60 minutes - Asthma management: 30 to 60 minutes - Chronic pain management: 60 minutes - Initial mental health consultations: 60 minutes - Follow-up mental health appointments: 30 to 60 minutes - Therapy sessions: 45 to 60 minutes - Blood draws: 15 minutes - Urine tests: 15 minutes - EKGs: 30 minutes - Biopsies: 60 to 90 minutes - Medication management: 15 to 30 minutes - Wart removals: 30 minutes - Skin tag removals: 30 minutes - Ear wax removals: 30 minutes -Note: These are general guidelines and may vary depending on the specific needs of the patient and the provider. -Provide guidance on standard appointment lengths. Wait for the user to reply to you at the current step before moving on to the next step. - -As the third step, determine the availability of the patient's desired appointment type: -Ask the patient what the earliest and latest dates are for a potential appointment. Search for the available appointments based on the patient's desired data range. Provide the options to the patient in natural language sentences, not lists or bullet points. Then ask the patient to choose one option among the available appointment times from the search. Wait for the user to choose an option at the current step before moving on to the next step. - -As the fourth step, confirm the chosen appointment time with the patient: -Confirm the appointment details with the patient. 
Provide instructions on next steps (e.g. "We will send you a confirmation email with the appointment details. Please arrive 15 minutes prior to your scheduled time.")." - -As the fifth step, provide a word of reassurance to the patient and wish them good health (e.g. "I wish you good health in the future!") - -After you're done assisting one patient with making their appointment, you should be ready to assist the next patient, forget your previous conversation and start again from step one. - -Be empathetic, encouraging and friendly with the patient in your conversation. - -If the user says they want to start over, forget everything the user has told you, including their name, and start again from the first step with welcoming them (e.g. "Welcome! I'm your appointment scheduling assistant!"), then second step, and onwards. - -Do not include any special characters, lists, or bullet points in your response. Return date and time in the spoken format, for example "12/08/2024 at 08:00 AM" should be returned as "December 8th 2024 at 8 AM". \ No newline at end of file diff --git a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/system_prompts/medication_lookup_system_prompt.txt b/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/system_prompts/medication_lookup_system_prompt.txt deleted file mode 100644 index 28d25b15c..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/system_prompts/medication_lookup_system_prompt.txt +++ /dev/null @@ -1,3 +0,0 @@ -You are a specialized assistant for handling a patient's medications and information related to the patient's medication. Retrieve and confirm the medication details with the customer, do not edit or modify the retrieved medication details. Remember that the task isn't completed until after the relevant information asked by the user has been retrieved. -When providing your answers, provide them in natural language sentences, not lists or bullet points. -Do not waste the user's time. Do not make up invalid tools or functions. diff --git a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/system_prompts/patient_intake_system_prompt.txt b/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/system_prompts/patient_intake_system_prompt.txt deleted file mode 100644 index b7b7a0c55..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/graph_definitions/system_prompts/patient_intake_system_prompt.txt +++ /dev/null @@ -1,22 +0,0 @@ -You are a specialized assistant for handling patient intake, registering a new patient with all required information fields, or verifying an existing patient's information for all required information fields. -There are three steps you need to take. -Firstly, start the conversation by welcoming the patient to the patient intake agent (e.g. "Welcome to our clinic! I'm so glad you're here. I’m the patient intake assistant and we're going to do our best to help you feel better. Can you please tell me a little bit about what brings you in today?"). Give the patient a chance to respond before moving on. 
-Secondly, iterate through each of the following fields in the list and ask for the patient's information in each field when performing patient intake, ask one field at a time: -Patient Name -Patient Date of Birth -Current symptoms -The time duration of current symptoms -Current medications that the patient is taking -Allergies in medication -Patient pharmacy location - -Be kind, welcoming, empathetic, and cheerful in your tone when intaking a patient. -Remember that the task isn't completed until after the information for each of the fields has been asked and retrieved. -Thirdly, after you have all the necessary infomation from the patient, confirm the information with the patient in natural language sentences, do not use lists or bullet points. - -After confirmation, in the background, print and transmit the gathered information without telling the patient the details, just let them know the patient intake information has been saved, and wish them good health. -Use the provided tool as necessary. -Do not ask unnecessary questions that are not on the list above. Do not make up invalid tools or functions. - -If the user says they want to start over, forget everything the user has told you, including their name, and start again from the first step with welcoming them (e.g. "Welcome to our clinic! I'm so glad you're here. I’m the patient intake assistant and we're going to do our best to help you feel better. Can you please tell me a little bit about what brings you in today?"), then second step, then third step. -Do not include any special characters in your response. diff --git a/industries/healthcare/agentic-healthcare-front-desk/images/ACE_diagram.png b/industries/healthcare/agentic-healthcare-front-desk/images/ACE_diagram.png deleted file mode 100644 index 6c6359304..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/images/ACE_diagram.png and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/images/ace_ferret_avatar.png b/industries/healthcare/agentic-healthcare-front-desk/images/ace_ferret_avatar.png deleted file mode 100644 index 526e772d1..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/images/ace_ferret_avatar.png and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/images/api-catalog-generate-api-key.png b/industries/healthcare/agentic-healthcare-front-desk/images/api-catalog-generate-api-key.png deleted file mode 100644 index c4332dc54..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/images/api-catalog-generate-api-key.png and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/images/architecture_diagram.png b/industries/healthcare/agentic-healthcare-front-desk/images/architecture_diagram.png deleted file mode 100644 index 57e388834..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/images/architecture_diagram.png and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/images/example_ui.png b/industries/healthcare/agentic-healthcare-front-desk/images/example_ui.png deleted file mode 100644 index fa1b9eade..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/images/example_ui.png and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/images/key-generated.png b/industries/healthcare/agentic-healthcare-front-desk/images/key-generated.png deleted file mode 100644 index e1f204d90..000000000 Binary 
files a/industries/healthcare/agentic-healthcare-front-desk/images/key-generated.png and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/images/llama31-70b-instruct-get-api-key.png b/industries/healthcare/agentic-healthcare-front-desk/images/llama31-70b-instruct-get-api-key.png deleted file mode 100644 index 8ddf36229..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/images/llama31-70b-instruct-get-api-key.png and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/images/llama31-70b-instruct-model-card.png b/industries/healthcare/agentic-healthcare-front-desk/images/llama31-70b-instruct-model-card.png deleted file mode 100644 index ca02ed1c5..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/images/llama31-70b-instruct-model-card.png and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/images/repo_overview_structure_diagram.png b/industries/healthcare/agentic-healthcare-front-desk/images/repo_overview_structure_diagram.png deleted file mode 100644 index 6be0e1ffe..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/images/repo_overview_structure_diagram.png and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/requirements.txt b/industries/healthcare/agentic-healthcare-front-desk/requirements.txt deleted file mode 100644 index ec1aeb318..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/requirements.txt +++ /dev/null @@ -1,17 +0,0 @@ -langchain==0.3.7 -langgraph==0.2.50 -langchain-community==0.3.7 -langchain_nvidia_ai_endpoints==0.3.5 -gradio==4.43.0 -nvidia-riva-client==2.14.0 -pydantic==2.8.2 -fastapi==0.112.1 -annoy==1.17.3 -PyStemmer==2.2.0.1 -pydantic_core==2.20.1 -tavily-python==0.4.0 -pandas==2.2.2 -fhirclient==4.2.0 -python-dotenv==1.0.1 -pycountry==23.12.11 -bleach==6.1.0 \ No newline at end of file diff --git a/industries/healthcare/agentic-healthcare-front-desk/sample_db/generate_test_sqlite.ipynb b/industries/healthcare/agentic-healthcare-front-desk/sample_db/generate_test_sqlite.ipynb deleted file mode 100644 index e0bf5b06e..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/sample_db/generate_test_sqlite.ipynb +++ /dev/null @@ -1,124 +0,0 @@ -{ - "cells": [ - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# This notebook is for generating the sample sqlite database filled with available appointment times.\n", - "# This will be the sample test database that the appointment assistant can connect to.\n", - "# The interaction between the appointment assistant's tools and the sample database \n", - "# is for illustration purposes only, to illustrate how we can interact with any type of database / API via API valls.\n", - "\n", - "import sqlite3\n", - "\n", - "import pandas as pd\n", - "\n", - "import numpy as np\n", - "\n", - "def generate_random_pd_df(possible_providers, days = 60, start_hour = 8, amount_work_hours = 9):\n", - " start_date = np.datetime64('today')\n", - " end_date = np.datetime64('today') + days\n", - "\n", - " list_dates = np.arange(start_date, end_date)\n", - " \n", - " \n", - " list_datetimes = [np.datetime64(str(d)+\" \"+\"08:00:00\")+ 3600 * np.random.choice(np.arange(0, amount_work_hours)) for d in list_dates]\n", - "\n", - " list_doctors = np.random.choice(possible_providers, len(list_dates))\n", - " \n", - " return list_datetimes, list(list_doctors)\n", - 
"\n", - "\n", - "possible_providers = [\"Doctor A\", \"Doctor B\", \"Doctor C\", \"Doctor D\"]\n", - "possible_appt_types = [\"Adult physicals\",\n", - " \"Pediatric physicals\",\n", - " \"Follow-up appointments\",\n", - " \"Sick visits\",\n", - " \"Flu shots\",\n", - " \"Other vaccinations\",\n", - " \"Allergy shots\",\n", - " \"B12 injections\",\n", - " \"Diabetes management\",\n", - " \"Hypertension management\",\n", - " \"Asthma management\",\n", - " \"Chronic pain management\",\n", - " \"Initial mental health consultations\",\n", - " \"Follow-up mental health appointments\",\n", - " \"Therapy sessions\",\n", - " \"Blood draws\",\n", - " \"Urine tests\",\n", - " \"EKGs\",\n", - " \"Biopsies\",\n", - " \"Medication management\",\n", - " \"Wart removals\",\n", - " \"Skin tag removals\",\n", - " \"Ear wax removals\"\n", - " ]\n", - "\n", - "\n", - "\n", - "num_days=365\n", - "\n", - "\n", - "datetime, doctor, appointment_type, patient = [], [], [], []\n", - "for type in possible_appt_types:\n", - " list_datetimes, list_doctors = generate_random_pd_df(days = num_days, possible_providers = possible_providers)\n", - " datetime.extend(list_datetimes)\n", - " doctor.extend(list_doctors)\n", - " appointment_type.extend(list(np.repeat(type, num_days)))\n", - " patient.extend(list(np.repeat(None, num_days)))\n", - "\n", - "print(datetime)\n", - "print(doctor)\n", - "print(appointment_type)\n", - "print(patient)\n", - "\n", - "dict = {'datetime': datetime, 'doctor': doctor, \"appointment_type\": appointment_type, \"patient\": patient} \n", - " \n", - "df = pd.DataFrame(dict)\n", - "df[\"datetime\"] = pd.to_datetime(df[\"datetime\"])\n", - " \n", - "print(df) \n" - ] - }, - { - "cell_type": "code", - "execution_count": 2, - "metadata": {}, - "outputs": [], - "source": [ - "import sqlite3\n", - "local_file = \"test_db_testing.sqlite\"\n", - "conn = sqlite3.connect(local_file)\n", - "cursor = conn.cursor()\n", - "\n", - "df.to_sql(\"appointment_schedule\", conn, if_exists=\"replace\")\n", - "conn.commit()\n", - "conn.close()" - ] - } - ], - "metadata": { - "kernelspec": { - "display_name": ".venv", - "language": "python", - "name": "python3" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.10.14" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} diff --git a/industries/healthcare/agentic-healthcare-front-desk/sample_db/test_db.sqlite b/industries/healthcare/agentic-healthcare-front-desk/sample_db/test_db.sqlite deleted file mode 100644 index 3281c9063..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/sample_db/test_db.sqlite and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/__init__.py b/industries/healthcare/agentic-healthcare-front-desk/ui_assets/__init__.py deleted file mode 100644 index d50bcc445..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# SPDX-License-Identifier: Apache-2.0 -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. \ No newline at end of file diff --git a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/css/css.py b/industries/healthcare/agentic-healthcare-front-desk/ui_assets/css/css.py deleted file mode 100644 index 281f7776b..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/css/css.py +++ /dev/null @@ -1,39 +0,0 @@ -import os - -import gradio as gr - -bot_title = os.getenv("BOT_TITLE", "NVIDIA Inference Microservice") - -header = f""" - -{bot_title} - -""" - -with open("ui_assets/css/style.css", "r") as file: - css = file.read() - -theme = gr.themes.Monochrome(primary_hue="emerald", secondary_hue="green", font=["nvidia-sans", "sans-serif"]).set( - button_primary_background_fill="#76B900", - button_primary_background_fill_dark="#76B900", - button_primary_background_fill_hover="#569700", - button_primary_background_fill_hover_dark="#569700", - button_primary_text_color="#000000", - button_primary_text_color_dark="#ffffff", - button_secondary_background_fill="#76B900", - button_secondary_background_fill_dark="#76B900", - button_secondary_background_fill_hover="#569700", - button_secondary_background_fill_hover_dark="#569700", - button_secondary_text_color="#000000", - button_secondary_text_color_dark="#ffffff", - slider_color="#76B900", - color_accent="#76B900", - color_accent_soft="#76B900", - body_text_color="#000000", - body_text_color_dark="#ffffff", - color_accent_soft_dark="#76B900", - border_color_accent="#ededed", - border_color_accent_dark="#3d3c3d", - block_title_text_color="#000000", - block_title_text_color_dark="#ffffff", -) diff --git a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/css/faviconV2.png b/industries/healthcare/agentic-healthcare-front-desk/ui_assets/css/faviconV2.png deleted file mode 100644 index 18c2505fd..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/css/faviconV2.png and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/css/style.css b/industries/healthcare/agentic-healthcare-front-desk/ui_assets/css/style.css deleted file mode 100644 index fd7e26889..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/css/style.css +++ /dev/null @@ -1,36 +0,0 @@ -@font-face { - font-family: "nvidia-sans"; - src: url("/file=../fonts/NVIDIASansWebWOFFFontFiles/WOFF2/NVIDIASans_W_Bd.woff2") - format("woff2"); - font-style: bold; -} - -@font-face { - font-family: "nvidia-sans"; - src: url("/file=../fonts/NVIDIASansWebWOFFFontFiles/WOFF2/NVIDIASans_W_Rg.woff2") - format("woff2"); - font-style: normal; -} - -@font-face { - font-family: "nvidia-sans"; - src: url("/file=../fonts/NVIDIASansWebWOFFFontFiles/WOFF2/NVIDIASans_W_It.woff2") - format("woff2"); - font-style: italic; -} - -.header { - padding: 60px; - text-align: center; - color: #76b900; - font-size: 30px; -} - -#chatbot { - flex-grow: 2; - overflow: auto; -} - -footer { - visibility: hidden; -} diff --git a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/fonts/NVIDIASansWebWOFFFontFiles/NVIDIA Sans EULA.pdf 
b/industries/healthcare/agentic-healthcare-front-desk/ui_assets/fonts/NVIDIASansWebWOFFFontFiles/NVIDIA Sans EULA.pdf deleted file mode 100644 index 8c54f37d5..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/fonts/NVIDIASansWebWOFFFontFiles/NVIDIA Sans EULA.pdf and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/fonts/NVIDIASansWebWOFFFontFiles/NVIDIA Sans Partner Usage Summary.pdf b/industries/healthcare/agentic-healthcare-front-desk/ui_assets/fonts/NVIDIASansWebWOFFFontFiles/NVIDIA Sans Partner Usage Summary.pdf deleted file mode 100644 index ec8fab6a6..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/fonts/NVIDIASansWebWOFFFontFiles/NVIDIA Sans Partner Usage Summary.pdf and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/fonts/NVIDIASansWebWOFFFontFiles/WOFF2/NVIDIASans_W_Bd.woff2 b/industries/healthcare/agentic-healthcare-front-desk/ui_assets/fonts/NVIDIASansWebWOFFFontFiles/WOFF2/NVIDIASans_W_Bd.woff2 deleted file mode 100644 index c0fe1a6a7..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/fonts/NVIDIASansWebWOFFFontFiles/WOFF2/NVIDIASans_W_Bd.woff2 and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/fonts/NVIDIASansWebWOFFFontFiles/WOFF2/NVIDIASans_W_It.woff2 b/industries/healthcare/agentic-healthcare-front-desk/ui_assets/fonts/NVIDIASansWebWOFFFontFiles/WOFF2/NVIDIASans_W_It.woff2 deleted file mode 100644 index f11a82b0f..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/fonts/NVIDIASansWebWOFFFontFiles/WOFF2/NVIDIASans_W_It.woff2 and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/fonts/NVIDIASansWebWOFFFontFiles/WOFF2/NVIDIASans_W_Rg.woff2 b/industries/healthcare/agentic-healthcare-front-desk/ui_assets/fonts/NVIDIASansWebWOFFFontFiles/WOFF2/NVIDIASans_W_Rg.woff2 deleted file mode 100644 index ac9478555..000000000 Binary files a/industries/healthcare/agentic-healthcare-front-desk/ui_assets/fonts/NVIDIASansWebWOFFFontFiles/WOFF2/NVIDIASans_W_Rg.woff2 and /dev/null differ diff --git a/industries/healthcare/agentic-healthcare-front-desk/utils/__init__.py b/industries/healthcare/agentic-healthcare-front-desk/utils/__init__.py deleted file mode 100644 index d50bcc445..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/utils/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# SPDX-License-Identifier: Apache-2.0 -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
\ No newline at end of file diff --git a/industries/healthcare/agentic-healthcare-front-desk/utils/asr_utils.py b/industries/healthcare/agentic-healthcare-front-desk/utils/asr_utils.py deleted file mode 100644 index cbb9af793..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/utils/asr_utils.py +++ /dev/null @@ -1,170 +0,0 @@ -# SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# SPDX-License-Identifier: Apache-2.0 -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import logging -import queue -from threading import Thread - -import gradio as gr -import grpc -import numpy as np -import pycountry -import riva.client -import riva.client.proto.riva_asr_pb2 as riva_asr -import riva.client.proto.riva_asr_pb2_grpc as rasr_srv - - -class ASRSession: - def __init__(self): - self.is_first_buffer = True - self.request_queue = None - self.response_stream = None - self.response_thread = None - self.transcript = "" - - -_LOGGER = logging.getLogger(__name__) - -# Obtain the ASR languages available on the Riva server -ASR_LANGS = dict() -grpc_auth = None - - -def asr_init(auth): - global ASR_LANGS - global grpc_auth - grpc_auth = auth - try: - _LOGGER.info("Available ASR languages") - asr_client = riva.client.ASRService(grpc_auth) - - config_response = asr_client.stub.GetRivaSpeechRecognitionConfig(riva_asr.RivaSpeechRecognitionConfigRequest()) - for model_config in config_response.model_config: - if model_config.parameters["type"] == "online" and model_config.parameters["streaming"]: - language_code = model_config.parameters['language_code'] - language_name = f"{pycountry.languages.get(alpha_2=language_code[:2]).name} ({language_code})" - _LOGGER.info(f"{language_name} {model_config.model_name}") - ASR_LANGS[language_name] = {"language_code": language_code, "model": model_config.model_name} - except grpc.RpcError as e: - _LOGGER.info(e.details()) - ASR_LANGS["No ASR languages available"] = "No ASR languages available" - gr.Info( - 'The app could not find any available ASR languages. Thus, none will appear in the "ASR Language" dropdown menu. Check that you are connected to a Riva server with ASR enabled.' - ) - _LOGGER.info( - 'The app could not find any available ASR languages. Thus, none will appear in the "ASR Language" dropdown menu. Check that you are connected to a Riva server with ASR enabled.' 
- ) - - ASR_LANGS = dict(sorted(ASR_LANGS.items())) - - -def print_streaming_response(asr_session): - asr_session.transcript = "" - final_transcript = "" - try: - for response in asr_session.response_stream: - final = "" - partial = "" - if not response.results: - continue - if len(response.results) > 0 and len(response.results[0].alternatives) > 0: - for result in response.results: - if result.is_final: - final += result.alternatives[0].transcript - else: - partial += result.alternatives[0].transcript - - final_transcript += final - asr_session.transcript = final_transcript + partial - - except grpc.RpcError as rpc_error: - print(rpc_error.details()) - # TODO See if Gradio popup error mechanism can be used. - # For now whow error via transcript text box. - asr_session.transcript = rpc_error.details() - return - - -def start_recording(audio, language, asr_session): - _LOGGER.info('start_recording') - asr_session.is_first_buffer = True - asr_session.request_queue = queue.Queue() - return "", asr_session - - -def stop_recording(asr_session): - _LOGGER.info('stop_recording') - try: - asr_session.request_queue.put(None) - asr_session.response_thread.join() - except: - pass - return asr_session - - -def transcribe_streaming(audio, language, asr_session): - _LOGGER.info('transcribe_streaming') - if language == 'No ASR languages available': - gr.Info( - 'The app cannot access ASR services. Any attempt to transcribe audio will be unsuccessful. Check that you are connected to a Riva server with ASR enabled.' - ) - _LOGGER.info( - 'The app cannot access ASR services. Any attempt to transcribe audio will be unsuccessful. Check that you are connected to a Riva server with ASR enabled.' - ) - return None, None - rate, data = audio - if len(data.shape) > 1: - data = np.mean(data, axis=1) - - if not len(data): - return asr_session.transcript, asr_session - - if asr_session.is_first_buffer: - - streaming_config = riva.client.StreamingRecognitionConfig( - config=riva.client.RecognitionConfig( - encoding=riva.client.AudioEncoding.LINEAR_PCM, - language_code=ASR_LANGS[language]['language_code'], - max_alternatives=1, - profanity_filter=False, - enable_automatic_punctuation=True, - verbatim_transcripts=False, - sample_rate_hertz=rate, - audio_channel_count=1, - enable_word_time_offsets=True, - model=ASR_LANGS[language]['model'], - ), - interim_results=True, - ) - - rasr_stub = rasr_srv.RivaSpeechRecognitionStub(grpc_auth.channel) - - asr_session.response_stream = rasr_stub.StreamingRecognize(iter(asr_session.request_queue.get, None)) - - # First buffer should contain only the config - request = riva_asr.StreamingRecognizeRequest(streaming_config=streaming_config) - asr_session.request_queue.put(request) - - asr_session.response_thread = Thread(target=print_streaming_response, args=(asr_session,)) - - # run the thread - asr_session.response_thread.start() - - asr_session.is_first_buffer = False - - request = riva_asr.StreamingRecognizeRequest(audio_content=data.astype(np.int16).tobytes()) - asr_session.request_queue.put(request) - - return asr_session.transcript, asr_session \ No newline at end of file diff --git a/industries/healthcare/agentic-healthcare-front-desk/utils/tts_utils.py b/industries/healthcare/agentic-healthcare-front-desk/utils/tts_utils.py deleted file mode 100644 index 7441ae1e3..000000000 --- a/industries/healthcare/agentic-healthcare-front-desk/utils/tts_utils.py +++ /dev/null @@ -1,133 +0,0 @@ -# SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. 
All rights reserved. -# SPDX-License-Identifier: Apache-2.0 -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import json -import logging -import os -import time -from pathlib import Path -from threading import Thread -from typing import TYPE_CHECKING, Any, List - -import gradio as gr -import numpy as np -import pycountry -import riva.client -import riva.client.proto.riva_tts_pb2 as riva_tts - -_LOGGER = logging.getLogger(__name__) - -tts_sample_rate = int(os.getenv("TTS_SAMPLE_RATE", 48000)) - -# Obtain the TTS languages and voices available on the Riva server -TTS_MODELS = dict() -grpc_auth = None - - -def tts_init(auth): - global TTS_MODELS - global grpc_auth - grpc_auth = auth - try: - tts_client = riva.client.SpeechSynthesisService(auth) - config_response = tts_client.stub.GetRivaSynthesisConfig(riva_tts.RivaSynthesisConfigRequest()) - for model_config in config_response.model_config: - language_code = model_config.parameters['language_code'] - language_name = f"{pycountry.languages.get(alpha_2=language_code[:2]).name} ({language_code})" - voice_name = model_config.parameters['voice_name'] - subvoices = [voice.split(':')[0] for voice in model_config.parameters['subvoices'].split(',')] - full_voice_names = [voice_name + "." + subvoice for subvoice in subvoices] - - if language_name in TTS_MODELS: - TTS_MODELS[language_name]['voices'].extend(full_voice_names) - else: - TTS_MODELS[language_name] = {"language_code": language_code, "voices": full_voice_names} - - TTS_MODELS = dict(sorted(TTS_MODELS.items())) - - _LOGGER.info(json.dumps(TTS_MODELS, indent=4)) - except: - TTS_MODELS["No TTS languages available"] = "No TTS languages available" - gr.Info( - 'The app could not find any available TTS languages. Thus, none will appear in the "TTS Language" or "TTS Voice" dropdown menus. Check that you are connected to a Riva server with TTS enabled.' - ) - _LOGGER.info( - 'The app could not find any available TTS languages. Thus, none will appear in the "TTS Language" or "TTS Voice" dropdown menus. Check that you are connected to a Riva server with TTS enabled.' - ) - - -# Once the user selects a TTS language, narrow the options in the TTS voice -# dropdown menu accordingly -def update_voice_dropdown(language): - if language == "No TTS languages available": - voice_dropdown = gr.Dropdown(label="Voice", choices="No TTS voices available", value="No TTS voices available") - else: - voice_dropdown = gr.Dropdown( - label="Voice", choices=TTS_MODELS[language]['voices'], value=TTS_MODELS[language]['voices'][0] - ) - return voice_dropdown - - -def text_to_speech(text, language, voice): - - if language == "No TTS languages available": - gr.Info( - 'The app cannot access TTS services. Any attempt to synthesize audio will be unsuccessful. Check that you are connected to a Riva server with TTS enabled.' - ) - _LOGGER.info( - 'The app cannot access TTS services. Any attempt to synthesize audio will be unsuccessful. Check that you are connected to a Riva server with TTS enabled.' 
-        )
-        return None, gr.update(interactive=False)
-    if not text:
-        gr.Info('No text from which to synthesize a voice has been provided')
-        return None, gr.update(interactive=False)
-    if not voice:
-        gr.Info('No TTS voice or an invalid TTS voice has been selected')
-        return None, gr.update(interactive=False)
-
-
-    first_buffer = True
-    start_time = time.time()
-
-    # TODO: Gradio Flagging doesn't work with streaming audio output.
-    # See https://github.com/gradio-app/gradio/issues/5806
-    # TODO: Audio download does not work with streaming audio output.
-    # See https://github.com/gradio-app/gradio/issues/6570
-
-    tts_client = riva.client.SpeechSynthesisService(grpc_auth)
-
-    _LOGGER.info(f"Calling synthesize_online")
-
-    # To stay within the 400-character limit of Riva text-to-speech (TTS), longer answers are segmented by inserting 'full stops' every 300 characters (300 instead of 400 to take phoneme expansion into account)
-    for i in range((len(text) // 300) + 1):
-        indx = text.rfind(' ', 0, (i + 1) * 300)
-        text = text[:indx] + ' . ' + text[indx:]
-
-    response = tts_client.synthesize_online(
-        text=text,
-        voice_name=voice,
-        language_code=TTS_MODELS[language]['language_code'],
-        sample_rate_hz=tts_sample_rate,
-    )
-    for result in response:
-        if len(result.audio):
-            if first_buffer:
-                _LOGGER.info(f"TTS request [{result.id.value}] first buffer latency: {time.time() - start_time} sec")
-                first_buffer = False
-            yield (tts_sample_rate, np.frombuffer(result.audio, dtype=np.int16))
-
-    _LOGGER.info(f"TTS request [{result.id.value}] last buffer latency: {time.time() - start_time} sec")
-
-    yield (tts_sample_rate, np.frombuffer(b'', dtype=np.int16))
\ No newline at end of file
diff --git a/industries/healthcare/agentic-healthcare-front-desk/utils/ui.py b/industries/healthcare/agentic-healthcare-front-desk/utils/ui.py
deleted file mode 100644
index ea79775b0..000000000
--- a/industries/healthcare/agentic-healthcare-front-desk/utils/ui.py
+++ /dev/null
@@ -1,230 +0,0 @@
-from typing import Any, Dict, List, Tuple, Literal, Callable, Annotated, Optional
-import uuid
-import sys
-import os
-import logging
-import riva.client
-import gradio as gr
-SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
-sys.path.append(os.path.dirname(SCRIPT_DIR))
-from utils import asr_utils, tts_utils
-from ui_assets.css.css import css, theme
-
-
-
-def get_config_with_new_thread_id():
-    thread_id = str(uuid.uuid4())
-    config = {
-        "configurable": {
-            # Checkpoints are accessed by thread_id
-            "thread_id": thread_id,
-        }
-    }
-    return config
-
-def launch_demo_ui(assistant_graph, server_port, NVIDIA_API_KEY, RIVA_ASR_FUNCTION_ID, RIVA_TTS_FUNCTION_ID, RIVA_API_URI):
-    # Establish a connection to the Riva server
-    _LOGGER = logging.getLogger()
-    try:
-        use_ssl = False
-        metadata_asr = []
-        metadata_tts = []
-        if NVIDIA_API_KEY:
-            use_ssl = True
-            metadata_asr.append(("authorization", "Bearer " + NVIDIA_API_KEY))
-            metadata_tts.append(("authorization", "Bearer " + NVIDIA_API_KEY))
-        if RIVA_ASR_FUNCTION_ID:
-            use_ssl = True
-            metadata_asr.append(("function-id", RIVA_ASR_FUNCTION_ID))
-        if RIVA_TTS_FUNCTION_ID:
-            use_ssl = True
-            metadata_tts.append(("function-id", RIVA_TTS_FUNCTION_ID))
-
-        auth_tts = riva.client.Auth(None, use_ssl=use_ssl, uri=RIVA_API_URI, metadata_args=metadata_tts)
-        auth_asr = riva.client.Auth(None, use_ssl=use_ssl, uri=RIVA_API_URI, metadata_args=metadata_asr)
-        _LOGGER.info('Created riva.client.Auth success')
-    except:
-        _LOGGER.info('Error creating riva.client.Auth')
-
-    global config
-    config = get_config_with_new_thread_id()
-
-    def _print_event(event: dict, _printed: set, max_length=1500):
-        return_print = ""
-        current_state = event.get("dialog_state")
-        if current_state:
-            print("Currently in: ", current_state[-1])
-            return_print += "Currently in: "
-            return_print += current_state[-1]
-            return_print += "\n"
-        message = event.get("messages")
-        latest_msg_chatbot = ""
-        if message:
-            if isinstance(message, list):
-                message = message[-1]
-            if message.id not in _printed:
-                msg_repr = message.pretty_repr()
-                msg_repr_chatbot = str(message.content)
-                if len(msg_repr) > max_length:
-                    msg_repr = msg_repr[:max_length] + " ... (truncated)"
-                    msg_repr_chatbot = msg_repr_chatbot[:max_length] + " ... (truncated)"
-                return_print += msg_repr
-                latest_msg_chatbot = msg_repr_chatbot
-                print(msg_repr)
-                _printed.add(message.id)
-                return_print += "\n"
-        return return_print, latest_msg_chatbot
-
-    def interact(query: str, chat_history: List[Tuple[str, str]], full_response: str):
-        _printed = set()
-        # example with a single tool call
-        events = assistant_graph.stream(
-            {"messages": ("user", query)}, config, stream_mode="values"
-        )
-
-        latest_response = ""
-        for event in events:
-            return_print, latest_msg = _print_event(event, _printed)
-            full_response += return_print
-            if latest_msg != "":
-                latest_response = latest_msg
-
-        yield "", chat_history + [[query, latest_response]], full_response, latest_response
-
-    def new_thread():
-        global config
-        config = get_config_with_new_thread_id()
-
-    # RIVA auth
-    asr_utils.asr_init(auth_asr)
-    tts_utils.tts_init(auth_tts)
-
-    with gr.Blocks(title="Welcome to the Healthcare Contact Center", theme=theme, css=css) as demo:
-        gr.Markdown("# Welcome to the Healthcare Contact Center")
-
-        # session specific state across runs
-        state = gr.State(value=asr_utils.ASRSession())
-
-        with gr.Row(equal_height=True):
-            chatbot = gr.Chatbot(label="Healthcare Contact Center Agent", elem_id="chatbot", show_copy_button=True)
-            latest_response = gr.Textbox(label="Latest Response", visible=False)
-            full_response = gr.Textbox(label="Full Response Log", visible=False, elem_id="fullresponsebox", lines=25)
-
-        # input
-        with gr.Row():
-            with gr.Column(scale=10):
-                msg = gr.Textbox(label="Input Query", show_label=True, placeholder="Enter text and press ENTER", container=False)
-            #with gr.Row():
-            audio_mic = gr.Audio(
-                sources=["microphone"],
-                type="numpy",
-                scale=2,
-                streaming=True,
-                visible=True,
-                label="Transcribe Audio Query",
-                show_label=False,
-                container=False,
-                elem_id="microphone",
-            )
-        # buttons
-        with gr.Row():
-            submit_btn = gr.Button(value="Submit")
-            _ = gr.ClearButton([msg, chatbot], value="Clear UI")
-            full_resp_show = gr.Button(value="Show Full Response")
-            full_resp_hide = gr.Button(value="Hide Full Response", visible=False)
-        with gr.Row():
-            new_thread_btn = gr.Button(value="Clear Chat Memory")
-
-        # RIVA dropdowns
-        with gr.Accordion("ASR and TTS Settings"):
-            with gr.Row():
-                asr_language_list = list(asr_utils.ASR_LANGS)
-                asr_language_dropdown = gr.components.Dropdown(
-                    label="ASR Language", choices=asr_language_list, value=asr_language_list[0],
-                )
-                tts_language_list = list(tts_utils.TTS_MODELS)
-                tts_language_dropdown = gr.components.Dropdown(
-                    label="TTS Language", choices=tts_language_list, value=tts_language_list[0],
-                )
-                all_voices = []
-                try:
-                    for model in tts_utils.TTS_MODELS:
-                        all_voices.extend(tts_utils.TTS_MODELS[model]['voices'])
-                    default_voice = tts_utils.TTS_MODELS[tts_language_list[0]]['voices'][0]
-                except:
-                    all_voices.append("No TTS voices available")
-                    default_voice = "No TTS voices available"
-                tts_voice_dropdown = gr.components.Dropdown(
-                    label="TTS Voice", choices=all_voices, value=default_voice,
-                )
-
-        # TTS output box
-        # visible so that users can stop or replay playback
-        with gr.Row():
-            output_audio = gr.Audio(
-                label="Synthesized Speech",
-                autoplay=True,
-                interactive=False,
-                streaming=True,
-                visible=True,
-                show_download_button=False,
-            )
-
-        # hide/show context
-        def _toggle_full_response(btn: str) -> Dict[gr.component, Dict[Any, Any]]:
-            if btn == "Show Full Response":
-                out = [True, False, True]
-            if btn == "Hide Full Response":
-                out = [False, True, False]
-            return {
-                full_response: gr.update(visible=out[0]),
-                full_resp_show: gr.update(visible=out[1]),
-                full_resp_hide: gr.update(visible=out[2]),
-            }
-
-        full_resp_show.click(_toggle_full_response, [full_resp_show], [full_response, full_resp_show, full_resp_hide])
-        full_resp_hide.click(_toggle_full_response, [full_resp_hide], [full_response, full_resp_show, full_resp_hide])
-
-        msg.submit(interact, [msg, chatbot, full_response], [msg, chatbot, full_response, latest_response])
-        submit_btn.click(interact, [msg, chatbot, full_response], [msg, chatbot, full_response, latest_response])
-
-        new_thread_btn.click(new_thread, [], [])
-
-        tts_language_dropdown.change(
-            tts_utils.update_voice_dropdown, [tts_language_dropdown], [tts_voice_dropdown], api_name=False
-        )
-
-        audio_mic.start_recording(
-            asr_utils.start_recording, [audio_mic, asr_language_dropdown, state], [msg, state], api_name=False,
-        )
-        audio_mic.stop_recording(asr_utils.stop_recording, [state], [state], api_name=False)
-        audio_mic.stream(
-            asr_utils.transcribe_streaming, [audio_mic, asr_language_dropdown, state], [msg, state], api_name=False
-        )
-        audio_mic.clear(lambda: "", [], [msg], api_name=False)
-
-        latest_response.change(
-            tts_utils.text_to_speech,
-            [latest_response, tts_language_dropdown, tts_voice_dropdown],
-            [output_audio],
-            api_name=False,
-        )
-
-        # TODO: how to stop the audio
-
-    demo.queue().launch(
-        server_name="0.0.0.0",
-        server_port=server_port,
-        favicon_path="ui_assets/css/faviconV2.png",
-        allowed_paths=[
-            "ui_assets/fonts/NVIDIASansWebWOFFFontFiles/WOFF2/NVIDIASans_W_Rg.woff2",
-            "ui_assets/fonts/NVIDIASansWebWOFFFontFiles/WOFF2/NVIDIASans_W_Bd.woff2",
-            "ui_assets/fonts/NVIDIASansWebWOFFFontFiles/WOFF2/NVIDIASans_W_It.woff2",
-        ]
-    )
\ No newline at end of file
diff --git a/industries/healthcare/agentic-healthcare-front-desk/vars.env b/industries/healthcare/agentic-healthcare-front-desk/vars.env
deleted file mode 100644
index c15a51ba8..000000000
--- a/industries/healthcare/agentic-healthcare-front-desk/vars.env
+++ /dev/null
@@ -1,7 +0,0 @@
-NVIDIA_API_KEY="nvapi-"
-TAVILY_API_KEY="tvly-"
-RIVA_API_URI=grpc.nvcf.nvidia.com:443
-RIVA_ASR_FUNCTION_ID=1598d209-5e27-4d3c-8079-4751568b1081
-RIVA_TTS_FUNCTION_ID=0149dedb-2be8-4195-b9a0-e57e0e14f972
-BASE_URL="https://integrate.api.nvidia.com/v1"
-LLM_MODEL="meta/llama-3.3-70b-instruct"
\ No newline at end of file