---
layout: docs
title: Docs - AppMap Navie
description: "Reference Guide to AppMap Navie AI, examples of bring-your-own-llm configurations."
name: Bring Your Own Model Examples
navie-reference: true
toc: true
step: 5
---

# Bring Your Own Model Examples

## GitHub Copilot Language Model

Starting with VS Code `1.91`, and with an active GitHub Copilot subscription, you can use Navie with the Copilot Language Model as a supported backend model. This lets you pair the runtime-powered Navie AI Architect with your existing Copilot subscription, and it is the recommended option for users in corporate environments where Copilot is the only approved and supported language model.

#### Requirements <!-- omit in toc -->

The following items are required to use the GitHub Copilot Language Model with Navie:

- VS Code version `1.91` or greater
- AppMap extension version `v0.123.0` or greater
- The GitHub Copilot VS Code extension installed
- An active paid or trial GitHub Copilot subscription (signed in)

#### Setup <!-- omit in toc -->

Open the VS Code settings and search for `navie vscode`.

<img class="video-screenshot" src="/assets/img/product/navie-copilot-1.webp"/>

Click the box to use the `VS Code language model...`

After clicking the box to enable the VS Code language model, you'll be prompted to reload VS Code to apply the change.

<img class="video-screenshot" src="/assets/img/product/navie-copilot-2.webp"/>

After VS Code finishes reloading, open the AppMap extension.

Select `New Navie Chat` and confirm the model listed is `(via copilot)`.

<img class="video-screenshot" src="/assets/img/product/navie-copilot-3.webp"/>

You'll need to allow the AppMap extension access to the Copilot Language Models. After asking your first question to Navie, click `Allow` in the popup to grant the necessary access.

<img class="video-screenshot" src="/assets/img/product/navie-copilot-4.webp"/>
| 47 | + |
| 48 | +#### Troubleshooting <!-- omit in toc --> |
| 49 | + |
| 50 | +If you attempt to enable the Copilot language models without the Copilot Extension installed, you'll see the following error in your code editor. |
| 51 | + |
| 52 | +<img class="video-screenshot" src="/assets/img/product/navie-copilot-5.webp"/> |
| 53 | + |
| 54 | +Click `Install Copilot` to complete the installation for language model support. |
| 55 | + |
| 56 | +If you have the Copilot extension installed, but have not signed in, you'll see the following notice. |
| 57 | + |
| 58 | +<img class="video-screenshot" src="/assets/img/product/navie-copilot-6.webp"/> |
| 59 | + |
| 60 | +Click the `Sign in to GitHub` and login with an account that has a valid paid or trial GitHub Copilot subscription. |
| 61 | + |
| 62 | +#### Video Demo <!-- omit in toc --> |
| 63 | + |
| 64 | +{% include vimeo.html id='992238965' %} |

## OpenAI

**Note:** We recommend configuring your OpenAI key using the code editor extension. Follow the [Bring Your Own Key](/docs/using-navie-ai/bring-your-own-model.html#configuring-your-openai-key) docs for instructions.

Only `OPENAI_API_KEY` needs to be set; the other settings can keep their defaults:

| Setting | Example value |
|---|---|
| `OPENAI_API_KEY` | `sk-9spQsnE3X7myFHnjgNKKgIcGAdaIG78I3HZB4DFDWQGM` |

When using your own OpenAI API key, you can also change the OpenAI model Navie uses. For example, you could use `gpt-3.5` or a preview model like `gpt-4-vision-preview`:

| Setting | Example value |
|---|---|
| `APPMAP_NAVIE_MODEL` | `gpt-4-vision-preview` |
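
If you'd like to confirm your key and model choice before pointing Navie at them, the following sketch uses the official `openai` Python package (this assumes `openai` v1.x is installed; the key is a placeholder):

```python
# Sanity-check an OpenAI key and model before configuring Navie with them.
# Assumes the `openai` Python package (v1.x) is installed; the key is a placeholder.
from openai import OpenAI

client = OpenAI(api_key="sk-...your-key-here...")  # value of OPENAI_API_KEY

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # same value you would set for APPMAP_NAVIE_MODEL
    messages=[{"role": "user", "content": "Reply with the word ok."}],
)
print(response.choices[0].message.content)
```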

## Anthropic (Claude)

AppMap supports the Anthropic suite of large language models, such as Claude Sonnet or Claude Opus.

To use AppMap Navie with Anthropic LLMs, you need to generate an API key for your account.

Log in to your [Anthropic dashboard](https://console.anthropic.com/dashboard) and choose the option to "Get API Keys".

Click the box to "Create Key".

In the next box, give your key an easy-to-recognize name.

In your VS Code or JetBrains editor, configure the following environment variables. For more details on configuring these environment variables in your code editor, refer to the [AppMap BYOM documentation](/docs/using-navie-ai/bring-your-own-model.html#configuration).

| Setting | Example value |
|---|---|
| `ANTHROPIC_API_KEY` | `sk-ant-api03-8SgtgQrGB0vTSsB_DeeIZHvDrfmrg` |
| `APPMAP_NAVIE_MODEL` | `claude-3-5-sonnet-20240620` |

When setting `APPMAP_NAVIE_MODEL`, refer to the [Anthropic documentation](https://docs.anthropic.com/en/docs/intro-to-claude#model-options) for the latest available models to choose from.
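
To confirm the key and model name work before adding them to Navie, here is a minimal sketch using the official `anthropic` Python SDK (this assumes the `anthropic` package is installed; the key is a placeholder):

```python
# Minimal check that an Anthropic API key and model name are valid.
# Assumes the `anthropic` Python package is installed; the key is a placeholder.
import anthropic

client = anthropic.Anthropic(api_key="sk-ant-...your-key-here...")  # ANTHROPIC_API_KEY

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # same value you would set for APPMAP_NAVIE_MODEL
    max_tokens=32,
    messages=[{"role": "user", "content": "Reply with the word ok."}],
)
print(message.content[0].text)
```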

#### Video Demo <!-- omit in toc -->

{% include vimeo.html id='1003330117' %}

## Azure OpenAI

Assuming you have [created](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/create-resource) a `navie` GPT-4 deployment on the `contoso.openai.azure.com` OpenAI instance, set:

| Setting | Example value |
|---|---|
| `AZURE_OPENAI_API_KEY` | `e50edc22e83f01802893d654c4268c4f` |
| `AZURE_OPENAI_API_VERSION` | `2024-02-01` |
| `AZURE_OPENAI_API_INSTANCE_NAME` | `contoso` |
| `AZURE_OPENAI_API_DEPLOYMENT_NAME` | `navie` |
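
To double-check the deployment before handing these values to Navie, here is a quick sketch with the `openai` package's Azure client (assuming `openai` v1.x is installed; the key, instance, and deployment names are the example values above):

```python
# Verify an Azure OpenAI deployment with the same values Navie will use.
# Assumes the `openai` Python package (v1.x); the values mirror the example table above.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="e50edc22e83f01802893d654c4268c4f",         # AZURE_OPENAI_API_KEY
    api_version="2024-02-01",                           # AZURE_OPENAI_API_VERSION
    azure_endpoint="https://contoso.openai.azure.com",  # derived from AZURE_OPENAI_API_INSTANCE_NAME
)

response = client.chat.completions.create(
    model="navie",  # the deployment name, AZURE_OPENAI_API_DEPLOYMENT_NAME
    messages=[{"role": "user", "content": "Reply with the word ok."}],
)
print(response.choices[0].message.content)
```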

## AnyScale Endpoints

[AnyScale Endpoints](https://www.anyscale.com/endpoints) allows querying a selection of open-source LLMs. After you create an account, you can use it by setting:

| Setting | Example value |
|---|---|
| `OPENAI_API_KEY` | `esecret_myxfwgl1iinbz9q5hkexemk8f4xhcou8` |
| `OPENAI_BASE_URL` | `https://api.endpoints.anyscale.com/v1` |
| `APPMAP_NAVIE_MODEL` | `mistralai/Mixtral-8x7B-Instruct-v0.1` |

Consult the [AnyScale documentation](https://docs.endpoints.anyscale.com/) for model names. We recommend using Mixtral models with Navie.
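
Because AnyScale Endpoints expose an OpenAI-compatible API, you can sanity-check these settings by pointing the `openai` Python client at the AnyScale base URL (a sketch, assuming `openai` v1.x is installed and using a placeholder key):

```python
# Sanity-check AnyScale Endpoints settings by overriding the OpenAI client's base URL.
# Assumes the `openai` Python package (v1.x) is installed; the key is a placeholder.
from openai import OpenAI

client = OpenAI(
    api_key="esecret_...your-key-here...",             # OPENAI_API_KEY
    base_url="https://api.endpoints.anyscale.com/v1",  # OPENAI_BASE_URL
)

response = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",  # APPMAP_NAVIE_MODEL
    messages=[{"role": "user", "content": "Reply with the word ok."}],
)
print(response.choices[0].message.content)
```

The same base URL override pattern applies to any OpenAI-compatible endpoint, including Fireworks AI below.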

#### AnyScale Demo with VS Code <!-- omit in toc -->

{% include vimeo.html id='970914908' %}

#### AnyScale Demo with JetBrains <!-- omit in toc -->

{% include vimeo.html id='970914884' %}

## Fireworks AI

You can use [Fireworks AI](https://fireworks.ai/) and their serverless or on-demand models as a compatible backend for AppMap Navie AI.

After creating an account on Fireworks AI, you can configure your Navie environment settings:

| Setting | Example value |
|---|---|
| `OPENAI_API_KEY` | `WBYq2mKlK8I16ha21k233k2EwzGAJy3e0CLmtNZadJ6byfpu7c` |
| `OPENAI_BASE_URL` | `https://api.fireworks.ai/inference/v1` |
| `APPMAP_NAVIE_MODEL` | `accounts/fireworks/models/mixtral-8x22b-instruct` |

Consult the [Fireworks AI documentation](https://fireworks.ai/models) for a full list of the models they currently support.

#### Video Demo <!-- omit in toc -->

{% include vimeo.html id='992941358' %}

## Ollama

You can use [Ollama](https://ollama.com/) to run Navie with local models. After you've successfully run a model with the `ollama run` command, you can configure Navie to use it:

| Setting | Example value |
|---|---|
| `OPENAI_API_KEY` | `dummy` |
| `OPENAI_BASE_URL` | `http://127.0.0.1:11434/v1` |
| `APPMAP_NAVIE_MODEL` | `mixtral` |

**Note:** Even though the model runs locally, a dummy placeholder API key is still required.
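
Since Ollama serves an OpenAI-compatible API on port `11434`, you can confirm the local endpoint responds before configuring Navie. A minimal sketch, assuming `ollama run mixtral` is already serving and the `openai` Python package (v1.x) is installed:

```python
# Confirm a local Ollama model responds on the OpenAI-compatible endpoint Navie will use.
# Assumes `ollama run mixtral` is already running; the API key can be any placeholder value.
from openai import OpenAI

client = OpenAI(
    api_key="dummy",                       # OPENAI_API_KEY (placeholder, but required)
    base_url="http://127.0.0.1:11434/v1",  # OPENAI_BASE_URL
)

response = client.chat.completions.create(
    model="mixtral",  # APPMAP_NAVIE_MODEL
    messages=[{"role": "user", "content": "Reply with the word ok."}],
)
print(response.choices[0].message.content)
```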

## LM Studio

You can use [LM Studio](https://lmstudio.ai/) to run Navie with local models.

After downloading a model to run, select the option to run a local server.

<img class="video-screenshot" src="/assets/img/product/lmstudio-run-local-server.webp"/>

In the next window, select which model you want to load into the local inference server.

<img class="video-screenshot" src="/assets/img/product/lmstudio-load-model.webp"/>

After loading your model, you can confirm it's running successfully by checking the logs.

**Note:** Save the URL the server is running on; you'll use it for the `OPENAI_BASE_URL` environment variable.

For example: `http://localhost:1234/v1`

<img class="video-screenshot" src="/assets/img/product/lmstudio-confirm-running.webp"/>

In the `Model Inspector`, copy the name of the model and use it for the `APPMAP_NAVIE_MODEL` environment variable.

For example: `Meta-Llama-3-8B-Instruct-imatrix`

<img class="video-screenshot" src="/assets/img/product/lmstudio-model-inspector.webp"/>

Continue configuring your local environment with the following environment variables, based on your LM Studio configuration. Refer to the [AppMap BYOM documentation](/docs/using-navie-ai/bring-your-own-model.html#configuration) for steps specific to your code editor.

| Setting | Example value |
|---|---|
| `OPENAI_API_KEY` | `dummy` |
| `OPENAI_BASE_URL` | `http://localhost:1234/v1` |
| `APPMAP_NAVIE_MODEL` | `Meta-Llama-3-8B-Instruct-imatrix` |

**Note:** Even though the model runs locally, a dummy placeholder API key is still required.

{% include vimeo.html id='969002308' %}